Nayan M. Kakoty • Rupam Goswami • Ramana Vinjamuri

Introduction to Embedded Systems and Robotics
A Practical Guide

Nayan M. Kakoty, School of Engineering, Tezpur University, Tezpur, Assam, India
Rupam Goswami, School of Engineering, Tezpur University, Tezpur, Assam, India
Ramana Vinjamuri, University of Maryland Baltimore County, Ellicott City, MD, USA
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2024
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by
similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The study and development of robots can be traced back to the sketches in Leonardo da Vinci's notebooks of around 1495. The first industrial robot, built by Unimation, was introduced in the early 1960s for operations such as lifting pieces of metal from die-casting machines. Owing to the obvious advantages of using robots, their applications have expanded since the beginning of the twenty-first century to the manufacturing industry, entertainment, healthcare technology, extra-terrestrial space exploration, coaching in games, and military applications. The gradual increase in robot applications has made it necessary for robots to become more user-friendly, and the day is not far when assistive robots will be part of everyone's daily living activities.
To help readers easily understand the core domains of engineering and technology involved in robotics, this book has been prepared to systematically introduce the fundamental concepts of embedded systems and robotics. The embedded system, an integral part of a robot, acts as its brain: it processes information and issues commands to control the robot's activities. An embedded system acquires information about the robot's surroundings through sensory inputs and commands the actuators to perform the corresponding motions or movements.
While reading the chapters, readers will be able to grasp the physics and technical information necessary to start developing robots. This will be facilitated through the lens of the ABC of robotics. In brief, the ABC pedagogy of the book, i.e., Acquire, Build, and Create, aims to generate interest in readers through practical examples. Readers are expected to first Acquire the basic information about embedded systems and robotics. Thereafter, they will be equipped with the knowledge to Build different circuits, systems, and sub-systems based on the acquired information. Finally, they will be ready to Create simple solutions to practical problems using the concepts gained.
This book is devised to train readers in realizing textbook concepts that can bring changes to the physical world. An embedded system is one of the easier and lower-cost media through which one can see changes in the environment brought about by the programming techniques learned in regular courses. To start in this direction, this book introduces the basic concepts of an embedded system to the reader. The citation of academic projects on robotics is expected to excite the readers
Chapter 1
Introduction to Embedded Systems
This chapter aims to familiarize the reader with the basics of embedded systems and their state-of-the-art applications. The fundamental components and applications of embedded systems are represented diagrammatically in this chapter. An effort is made to create a visual space in the reader's mind regarding the promising future orientations of embedded systems. After reading this chapter, the reader is expected to acquire the following knowledge:
1. Definition of an embedded system
2. Classification of embedded systems
3. Basic components of an embedded system
4. Applications and examples of embedded systems
There are three main components of an embedded system: hardware, software, and
real-time operating system (RTOS). Three specific categories of functions of these
three components are (a) reading the input or command from the outside world;
(b) processing the information; and (c) generating necessary signal as output for
bringing changes in the environment. Figure 1.1 shows the basic components of an
embedded system.
The CPU is responsible for processing the system inputs and taking decisions which
guide the system operation by executing the software instructions. It is the main
control unit of the system. The CPU in most embedded systems is either a micro-
processor or a microcontroller, but it can also be a digital signal processor (DSP),
complex instruction set computer (CISC) processor, reduced instruction set com-
puter (RISC) processor, or an advanced RISC machine (ARM) processor depending
on the application of the system.
1.3.1.2 Memory
The memory component is responsible for storing program code and data necessary
for system operation. The section of memory which stores information permanently
is called non-volatile memory. Read-only memory (ROM) is an example of
non-volatile memory that stores data in it even after the electrical power to the
system is switched off. Depending on the fabrication, erasing, and programming
techniques, non-volatile memories are divided into programmable ROM (PROM),
FLASH, erasable PROM (EPROM), or electrically EPROM (EEPROM) types.
The section of memory that stores information temporarily and loses its contents
when the electrical power is switched off is called volatile memory. Random access
memory (RAM) is a volatile memory. It is the main working memory of the
controller/processor where the information can be directly accessed from a memory
location. RAM is further categorized as Static RAM (SRAM) and Dynamic RAM
(DRAM) based on the technology used to store the data. In SRAM, data is stored in latch (flip-flop) circuits; in DRAM, it is stored as charge on capacitors. DRAM is slower but less expensive than SRAM, as it needs to be refreshed dynamically all the time.
Solid-state drives, commonly called SSDs, are a type of computer storage device that uses flash memory to store data electronically in non-volatile memory chips. Their faster access makes them efficient for expanding the data-handling capability of embedded systems.
For communication of information between the embedded system and the external world, two types of input-output ports are used: communication ports and user interface ports.
The ports that are used for serial and/or parallel exchange of information with
other devices or systems are categorized as communication ports. USB ports, printer
ports, wireless RF, and infrared ports are examples of input-output communication
ports. The functionality of these ports is defined specifically with respect to embed-
ded systems.
The input-output ports that are used for exchange of information with the user are
called user interface ports. Input-output ports connected to the keyboards, switches,
buzzers and audio, lights, numeric, alphanumeric, and graphic displays are under
this category.
The tools generally used by a developer to develop embedded software include an editor, compiler, assembler, debugger, and simulator. These tools are available within an integrated development environment (IDE). The software is written either in a high-level language such as Embedded C, Embedded C++, JavaScript, or Python, or in an assembly language specific to the target controller or processor.
Embedded systems play a significant role in our daily living activities starting from
home to workplaces, playground to healthcare, entertainment to e-commerce, and
travel to security systems. A schematic of embedded system applications is
presented in Fig. 1.3 and a few examples are cited below:
Embedded systems in household: Applications of embedded systems in household
appliances include microwave ovens, air conditioners, washing machines, refrig-
erators, dish-washers, and home automation.
Embedded systems in workplace: Applications of embedded systems in the work-
place include routers, firewalls, switches, network cards, smart card readers,
printers, and fax machines.
trained with personal profiles is increasingly needed. With these aspects, the future of embedded systems in conjunction with robotics can be envisaged to be more user-friendly and significant. From the societal viewpoint, embedded systems will find more applications focused on improving the quality of life at low cost through the automation of processes. The following sections briefly describe some of the specific technologies that will make dominant use of embedded systems in the near future.
The Internet of Things (IoT) is based on the concepts of algorithms (A), big data (B), and computational power (C), which, in short, may be called the ABC concept. It attempts to connect everything and everybody, at any time and anywhere, over the internet. With the ability of sensor technology to sense each and every activity of day-to-day life, sensor networks are generating big data (B). Advances in the algorithms (A) used for understanding the data, together with the increase in computational power (C) brought by advances in microelectronics, have enabled on-chip decision-making in embedded systems.
Embedded systems supported by powerful computation and fast communication aim at integrating the physical and cyber worlds. Cyber-physical embedded systems in robotics can be used for precision-based tasks such as robotic arms for medical surgery, exploration of extra-terrestrial domains, and security during biological warfare. The design of cyber-physical embedded systems is challenging considering the issues related to privacy and flexibility arising from their high level of complexity.
efficiency. The future of context-aware embedded systems holds promise for applications in a larger market.
The term “robot” first appeared in the play “Rossum's Universal Robots (R.U.R.)” by the Czech writer Karel Čapek and has been in use since 1921. It originated from the Czech word “robota,” which means “forced labor.” The first robot was an industrial robot designed and patented by J. Engelberger and George C. Devol in 1954. They started the Unimation Robotics Company in 1958 and manufactured the commercial version known as Unimate. This robot was first used at the automobile company General Motors for the automation of die-casting and spot-welding operations. Figure 2.1 shows a brief schematic representation of the history of robotics. The inclusion criteria for the events in this brief history of robotics are as follows:
1. Coining of Key Terminology: Events where key terms and concepts in robotics
were first introduced.
2. Technological Firsts and Innovations: The first occurrences of significant tech-
nological advancements in robotics.
3. Pioneering Autonomous Systems: Key developments in autonomous robotic
systems.
Asimov’s three laws of robotics, which were shaped in the literary work of Isaac
Asimov (1920–1992), define a crucial code of behavior that fictional autonomous
robots must obey as a condition for their integration into human society. They are
known as the “Three Laws of Robotics”:
1. A robot may not injure a human being, or, through inaction, allow a human being
to come to harm.
2. A robot must obey the orders given to it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
Although efforts have been made to follow the laws of robotics, there is, as yet, no automatic means of enforcing them.
Workspace: The volume in space that a robot’s end-effector can reach, both in
position and in orientation, is known as workspace. In Fig. 2.8, the volume
enclosed by the lines surrounding the robotic arm shows its workspace.
Actuator: The actuator of a robotic system is the component that brings about changes in the pose of other components in the system. The typical energy sources for these changes are electrical, hydraulic (fluid pressure), and pneumatic (air pressure).
Sensor: A sensor is a device or module that converts changes in a physical parameter or phenomenon into an equivalent electrical quantity.
Co-robot: Co-robots, or collaborative robots, are robots that interact with humans in close proximity or in a commonly shared workspace.
Since their inception, robots have been in use for their inherent advantages in terms of accuracy, safety, precision, and robustness. Depending on the type of application, robots can be placed in the following categories:
2.4.1. Industrial robot: Industrial robots are the ones deployed for industrialized
manufacturing and shipment of goods. Industrial robots can be either in
manufacturing or in logistic applications.
Manufacturing robots: These robots perform programmed repetitive tasks in
a customized and well-defined environment such as in an assembly line in
a factory. The commonly used manufacturing robots are articulated arms
or robotic arms that are specifically created for operations like material
handling, painting, welding, assembling, disassembling, picking, and plac-
ing. KUKA robot is one of the most popularly used manufacturing indus-
trial robots.
Logistic robots: Robots finding applications for storage and transportation of
goods in industries are logistics robots. Because of the inherent advantages
of these robots and increasing need of rapid parcel shipments in
e-commerce industries, use of logistic robots is increasing every day.
These robots are mostly mobile and automated vehicles that operate in
warehouses and storage facilities to transport goods.
2.4.2. Service robots: Robots that provide assistance to humans in their day-to-day
activities including domestic, office, space exploration, and hospitals are
under the category of service robots.
Domestic or household robots: These service robots are employed at home
for doing household tasks with ease, e.g., assistive robots for elderly
people and floor-cleaning robots.
Medical robots: Medical robots are service robots employed in hospitals and
medical facilities. Medical robots need to be extremely accurate and
precise in their operations, e.g., surgical robots like the da Vinci surgical
system and rehabilitation robots like prosthetic limbs.
Military robots: Robots finding applications in military service fall under this
category. This category of robots needs to be very robust in their opera-
tions irrespective of their work space, e.g., autonomous or remote-
controlled bomb discarding robots, military drones, and underwater
vehicles.
Entertainment robots: Entertainment robots are the type of service robots used for enrichment of the intelligence quotient through fun, e.g., Aibo, a robotic dog, and humanoid robots like QRIO and RoboSapien.
Space robots: These types of robots are deployed for extra-terrestrial space
exploration and to assist astronauts in space stations, e.g., Mars Rover
Curiosity and humanoid robot like Robonaut.
Educational robots: Robots used for realization of text-book concepts
through visualization of changes in the physical world belong to this
category, e.g., Lego Mindstorms robotic systems, Vex robotic system
design, and Thymio. Software tools like RoboAnalyzer (developed in
the Indian Institute of Technology, Delhi) and GraspIt! (developed in the
Columbia University) are also used for educational purposes. There also
exist educational robots that are used to carry out teaching functions, e.g.,
ABii robot (developed by VAN Robotics), NAO robot (developed by
SoftBank Robotics), and EMYS (developed by Flash Robotics).
Chapter 3
Sensors, Actuators, and Circuits in Robotics
There are a number of sensors, such as proximity, infrared, temperature, and pressure sensors, that one encounters often during daily living. The human body is equipped with five different types of sensors: eyes that detect light energy, ears that detect acoustic energy, the tongue and nose that detect certain chemicals, and skin that detects pressure and temperature. The eyes, ears, tongue, nose, and skin receive these variables from the environment and transmit signals to the brain, which controls the response. For example, when you touch a hot plate, it is your brain that informs you it is hot and, therefore, you take your hand off it. In this case, the skin works as the sensor, the brain as the controller, and the hand as the actuator.
This chapter describes the basic concepts of sensors, actuators, and controller
circuits used in robotics. On completion of this chapter, readers are expected to have
gained knowledge on the following:
1. Sensors and actuators used in robotics
2. Skills to implement circuits used in robotics
Proprioceptive sensors measure values that are internal to the system (or robot); for
example, an encoder reads the speed of a motor, a potentiometric sensor senses the
joint angles in a robotic arm, and a current sensor senses the electrical charge status
in a battery.
Exteroceptive sensors acquire information from the robot’s environment; for
example, infrared or ultrasonic sensors sense the presence of obstacles in the robot’s
path, gas sensors sense the chemical composition of gases in the robot’s environ-
ment, and pressure sensors measure the pressure experienced by a prosthetic hand.
Active infrared sensors or proximity sensors can emit as well as detect infrared
radiation. When an object is in proximity of the sensor, a beam of infrared light
emitted from the sensor’s LED is reflected back from the object and gets detected by
the receiver. It is commonly used in robots for obstacle detection.
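As an illustration, a minimal Arduino-style sketch can poll such a sensor; the pin numbers and the common digital-output obstacle module assumed here are illustrative choices, not taken from the text.

// Minimal sketch: polling a digital-output active infrared obstacle sensor.
// Pin numbers and the active-low behavior are assumptions for illustration;
// check the datasheet of the specific module used.
const int irPin = 2;        // sensor digital output
const int ledPin = 13;      // on-board LED used as an indicator

void setup() {
  pinMode(irPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Many IR obstacle modules pull their output LOW when an object is detected.
  if (digitalRead(irPin) == LOW) {
    digitalWrite(ledPin, HIGH);   // obstacle detected
  } else {
    digitalWrite(ledPin, LOW);    // path clear
  }
  delay(50);                      // simple polling interval
}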
Passive infrared (PIR) sensor can detect infrared radiation but cannot emit it. When
an object in motion that emits infrared radiation comes under the sensing range of the
sensor, two pyroelectric elements present inside the sensor measure the difference in
the amount of infrared radiation levels between them leading to a change in the
output voltage triggering the detection. PIR sensors are mostly used for detecting
presence of humans. The basic element of a PIR sensor is a P-N junction which
works based on the principle of recombination of electron-hole pairs (EHPs).
Infrared light radiation from an object onto the P-N junction impacts the recombi-
nation near the depletion region as presented in Fig. 3.1.
An ultrasonic sensor is used for measuring distance or detecting a target object in the surrounding environment. It comprises a transmitter, which transmits ultrasonic sound waves into the surroundings, and a receiver, which receives the sound waves reflected from an object. Ultrasonic sensors are independent of ambient light.
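For example, a short Arduino-style sketch along the following lines converts the echo time of a trigger/echo type ultrasonic module into a distance; the HC-SR04-style module and the pin assignment are assumptions made for this illustration.

// Distance measurement with a trigger/echo ultrasonic module (HC-SR04 assumed).
// Sound travels at roughly 343 m/s, i.e., about 0.0343 cm per microsecond;
// the echo time covers the round trip, so the distance is halved.
const int trigPin = 9;
const int echoPin = 10;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);                 // 10 us trigger pulse
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  long duration = pulseIn(echoPin, HIGH);      // echo time in microseconds
  float distanceCm = (duration * 0.0343f) / 2; // one-way distance in cm

  Serial.println(distanceCm);
  delay(100);
}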
A light sensor detects light and converts it into an electrical signal. The light energy
may be in the part of either visible or infrared light spectrum. The light sensor
generates a voltage difference corresponding to the light intensity. The two com-
monly used light sensors in robotics are light-dependent resistors and photovoltaic
cells.
A color sensor detects the color of an object and converts it into a frequency proportional to the intensity of the color. It is used in color-sorting robots to distinguish different colors.
A rotary encoder is a sensing device that detects position and speed, and converts the
angular motion or position of a shaft or axle into an electrical signal. The electrical
signal can be either analog or digital. It is used in robotics to serve as a feedback
system for position and speed control (Fig. 3.2).
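As a sketch of how such feedback is typically read (the pulses-per-revolution value and the interrupt pin are assumptions made for illustration), encoder pulses can be counted in an interrupt routine and converted to speed:

// Counting encoder pulses with an external interrupt and estimating speed.
// PULSES_PER_REV and the wiring to interrupt pin 2 are illustrative assumptions.
const int encoderPin = 2;
const unsigned int PULSES_PER_REV = 360;
volatile unsigned long pulseCount = 0;

void countPulse() {
  pulseCount++;              // called on every rising edge of the encoder output
}

void setup() {
  pinMode(encoderPin, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(encoderPin), countPulse, RISING);
  Serial.begin(9600);
}

void loop() {
  noInterrupts();
  unsigned long count = pulseCount;   // copy and reset atomically
  pulseCount = 0;
  interrupts();

  // Pulses counted over the previous ~1 s window give revolutions per second.
  float revPerSec = (float)count / PULSES_PER_REV;
  Serial.println(revPerSec);
  delay(1000);
}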
3.3.6 Accelerometer
A touch sensor is triggered by physical contact with other objects. It works on the capacitive sensing principle. Touch sensors are commonly used in the development of prosthetic hands and industrial grippers.
A pressure sensor measures pressure using resistive sensing. It converts the force applied on its sensing area into electrical energy in the form of a voltage. Most pressure sensors work on the principle of piezo-resistive sensing.
A thermal sensor detects any changes in the surrounding temperature, and generates
a voltage difference corresponding to the temperature change. It functions based on
resistive sensing principle. LM34, LM35, TMP35, TMP36, and TMP37 are a few
frequently used thermal sensors.
An emotion sensor interprets human facial expressions. It is used to build human-like robots that require the detection of emotions, e.g., the B5T HVC face-detection sensor.
A sound sensor detects sound waves from the environment and converts them into electrical energy in the form of a voltage proportional to the sound level. Since the voltage difference generated by the sensor is minimal, further amplification is required to produce a measurable voltage change. Sound sensors are usually used in robots to recognize speech or to develop simple clap-controlled robots, e.g., the microphone sound sensor.
A water flow sensor measures the flow rate of water in a channel. It consists of a water rotor and a Hall effect sensor that generates an electrical pulse on every revolution of the rotor, thereby measuring how much water has flowed through it. It is used in irrigation robots, e.g., the YFS201 water flow sensor.
A raindrop sensor detects water drops, converting the change in resistance caused by raindrops falling on the sensor into a voltage. It is used in robots employed for monitoring weather conditions, e.g., the rain weather sensor module.
A gas sensor is used to sense different types of gases and measure their concentrations. It detects the gas and converts the corresponding resistance change inside the filament of the sensor into a voltage signal. It is used in gas-detection robots for sensing toxic and flammable gases, e.g., the MQ2 smoke sensor and the MQ5 gas sensor.
3.3.16 Biosensors
Biosensors are innovative analytical devices that detect the presence or concentration of biomolecules called analytes. The input analytes react with a biological sensing membrane. On reaction with the analytes, the membrane undergoes changes in its electrical parameters, which are quantified using a transducer. This electrical signal is further analyzed in a signal-processing unit. Applications of biosensors are commonly found in medical science and health care. In robotics, biosensors are mostly used in assistive robotics and in man-machine interfacing. More applications of biosensors for military robots are under research.
Accuracy: It is defined as the difference between the measured value and the true value. The error in measurement is specified in terms of accuracy. It is expressed as a percentage of full scale or a percentage of reading.
Precision: It refers to the closeness of output values with which the sensor can
measure a physical quantity. For e.g., a high-precision pressure sensor will give
very similar readings for repeated measurements of the same pressure, say,
100.2 kPa, 100.3 kPa, and 100.2 kPa.
Resolution: It is the minimum change in input that can be sensed by the sensor. For
e.g., if resolution of a pressure sensor is 0.01 kPa, it can detect pressure changes
as small as 0.01 kPa.
Sensitivity: It is defined as the change in the output of the sensor per unit change in its input. For example, a sensitive pressure sensor will produce a noticeable electrical signal change for a small pressure variation, say 0.01 kPa, indicating its high sensitivity to small changes.
Linearity: Linearity is the maximum deviation of the measured values of a sensor from the ideal linear relationship between the input and output signals. The linearity of a sensor indicates how closely it adheres to this ideal behavior.
Repeatability: It is defined as the ability of a sensor to produce the same output every
time when the same input is applied, and all the physical and measurement
conditions are kept the same including the instrument and ambient conditions.
Repeatability ensures that the same sensor, under the same conditions, produces
consistent and reliable readings.
Reproducibility: It is defined as the ability of a sensor to produce the same output for the same input when the measurement conditions, such as the instrument or the ambient conditions, are changed. Reproducibility confirms whether the sensor's behavior can be reproduced in its entirety.
Range: The difference between the minimum and the maximum values that a sensor can measure.
Response Time: It expresses the time at which the output reaches a certain percentage
(for instance, 95%) of its final value in response to a step change of the input.
Saturation: It is defined as the state in which the limiting value of the sensor range
becomes the output value of the sensor. This happens when the quantity to be
measured by the sensor is larger than the dynamic range of the sensor.
To make something move, we need to apply a force or a torque on it. Actuators are
the generators of the forces or the torques that robots employ to move themselves
and other objects. Actuators are the muscles of a robot. All actuators are energy-
consuming mechanisms that convert various forms of energy into mechanical work.
The mechanical linkages and joints of the robot manipulator are driven by
actuators which may be pneumatic or hydraulic or electric. These actuators may
connect directly to the mechanical joints or may drive mechanical elements indi-
rectly through gears, chains, wires, tapes, or ball screws.
There are basically two types of actuators in robotics based on the motion produced: linear and rotary actuators.
Linear actuators produce linear motion, i.e., motion of a robotic link or joint along a straight line with respect to its adjacent links or joints. DC linear actuators, solenoids, muscle wires, and pneumatic and hydraulic cylinders are linear actuators. Linear actuators are mainly specified by three parameters: the minimum and maximum distance that the joint or link can move, the force required for the movement, and the speed of movement.
3.7.1 AC Motor
(the Lorentz force law), which rotates the motor. A commonly used AC motor is the three-phase induction motor. AC motors are mostly used in industrial robotics for high-torque applications. Figure 3.5 represents the schematic of an AC motor.
3.7.2 DC Motor
A DC motor is a type of rotational actuator that converts direct current (DC) into mechanical power. It operates on the principle that a current-carrying conductor placed in a magnetic field experiences a mechanical force. The angular motion of the rotating shaft of the motor is measured using encoders or potentiometers. DC motors are commonly used actuators in robotic applications, such as driving the wheels of a robot.
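As an illustration of how a DC motor's speed is commonly commanded from a microcontroller, a PWM signal can be applied through a motor driver; the driver wiring and pin numbers below are assumptions made for this sketch, not taken from the text.

// Driving a DC motor through an H-bridge style driver using PWM.
// The enable/direction pin assignment is an illustrative assumption; an external
// driver (e.g., an L293D-class IC) is required, since a motor must never be driven
// directly from a microcontroller pin.
const int enablePin = 5;   // PWM-capable pin connected to the driver enable input
const int dirPin1   = 6;   // driver direction inputs
const int dirPin2   = 7;

void setup() {
  pinMode(enablePin, OUTPUT);
  pinMode(dirPin1, OUTPUT);
  pinMode(dirPin2, OUTPUT);
}

void loop() {
  digitalWrite(dirPin1, HIGH);   // select one rotation direction
  digitalWrite(dirPin2, LOW);
  analogWrite(enablePin, 128);   // ~50% duty cycle, roughly half speed
  delay(2000);
  analogWrite(enablePin, 255);   // full speed
  delay(2000);
  analogWrite(enablePin, 0);     // stop
  delay(1000);
}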
A stepper motor is a type of rotational actuator that rotates in small angular steps. It
works on the principle of electromagnetism. A stepper motor consists of a stationary
part called stator and a moving part called rotor. When current flows in the coils of
the stator by energizing one or more of the stator phases, a rotating magnetic field is
developed and the rotor which is a permanent magnet gets aligned with the direction
of the generated field. As a result, the rotor starts rotating with the rotating magnetic
field in steps by a fixed number of degrees to finally achieve the desired position.
Stepper motors are used in robotic applications where discrete steps or angles of
orientation are required. Figure 3.7 shows the schematic of a stepper motor.
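A minimal sketch of this idea is given below; the four control pins and the one-phase-on full-step sequence are illustrative assumptions, and a dedicated driver IC (a ULN2003-style driver is assumed) normally sits between the microcontroller and the motor windings.

// Full-step sequence for a 4-phase stepper motor, energizing one phase at a time.
// Pin numbers and the driver are illustrative assumptions for this sketch.
const int coilPins[4] = {8, 9, 10, 11};

// One full-step pattern: each row energizes a single winding.
const int stepPattern[4][4] = {
  {1, 0, 0, 0},
  {0, 1, 0, 0},
  {0, 0, 1, 0},
  {0, 0, 0, 1}
};

void setup() {
  for (int i = 0; i < 4; i++) {
    pinMode(coilPins[i], OUTPUT);
  }
}

void loop() {
  // Stepping through the pattern rotates the rotor by a fixed angle per step.
  for (int s = 0; s < 4; s++) {
    for (int c = 0; c < 4; c++) {
      digitalWrite(coilPins[c], stepPattern[s][c]);
    }
    delay(10);   // step period controls the rotation speed
  }
}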
set of gears, and a lead screw. The concept behind the working of a DC linear actuator is that of an inclined plane: the lead screw serves as a ramp that converts a small rotational force acting over a larger distance into a force that moves the load. DC linear actuators are often fitted with a linear potentiometer to provide linear position feedback. They are used in robotic applications for lifting and tilting objects or machines. A schematic of a DC linear actuator is shown in Fig. 3.8.
A pneumatic actuator is a type of actuator that produces rotational or linear motion using compressed air. It comprises a piston, a cylinder, and a valve or port. When compressed air enters the cylinder through the valve, pressure builds up inside the cylinder, which results in either a controlled linear motion or a rotary motion of the piston. Pneumatic actuators are used in automation industries. A schematic of a pneumatic actuator is shown in Fig. 3.9.
The general electronic circuits used in robotics vary depending upon the specific
application of the robot. However, some of the very commonly used circuits are
discussed in the following sections.
The power supply circuit is one of the critical components of any robotic system for its proper functioning. This type of circuit acts as the energy source for all the units in a robotic system. It converts electrical power from a source to the required voltage, current, and frequency as per the specifications of the robot's components. An accurate power supply is the first criterion for any unit in a robotic system to work properly.
All power supplies have a power input connection, which receives energy in the
form of electric current from a source, and one or more power output connections
which deliver current to the robot’s components.
The source of power may come from an electric power grid, such as electrical
outlets or energy storage devices which include batteries, fuel cells, generators, and
solar cells. The input and output are usually hardwired circuit connections. Some
power supplies are separate standalone pieces of equipment, while others are built
into the load appliances that they power. Specific circuits for efficient power
management are part of power supply circuits.
amplified version of its input signal. The primary objective of an amplifier circuit is to boost the current or voltage of a signal at any stage of a circuit. It is also called a buffer circuit in some applications. Buffer circuits are mainly used where a signal has quite a low input current that needs to be increased while maintaining the same voltage level. They draw current from the power source and add it to the signal.
Chapter 4
Microcontrollers in Robotics
The basic internal blocks of a microcontroller are the central processing unit, read-only memory, read-write memory, a timer, input-output ports, and a serial communication port.
Microcontrollers are classified on the basis of their number of bits, i.e., width of the
data bus, memory device, instruction set, and memory architecture.
(i) According to the size of the data bus or number of bits:
Microcontrollers are classified as 8-bit, 16-bit, and 32-bit based on the size of the data bus, i.e., the number of parallel data lines present. 8-bit microcontrollers manipulate 8-bit data in the range 0x00-0xFF (2^8 = 256 values) during every clock cycle. Examples of 8-bit microcontrollers are the Intel 8031/8051, PIC1x, and Motorola MC68HC11 families.
16-bit microcontrollers offer greater precision and performance compared with 8-bit microcontrollers. They have a 16-bit data width with a range of 0x0000-0xFFFF (2^16 = 65,536 values). Some examples of 16-bit microcontrollers are the 8051XA, PIC2x, Intel 8096, and Motorola MC68HC12 families.
32-bit microcontrollers use a 32-bit data bus and can communicate 32 bits in parallel. This results in much faster operation and higher precision compared with 8-bit or 16-bit microcontrollers. They manipulate 32-bit data in the range 0x00000000-0xFFFFFFFF (2^32 = 4,294,967,296 values) during every clock cycle. Some examples are the Intel/Atmel 251 family and PIC3x.
(ii) According to memory devices:
Microcontrollers are divided into two types based on intrinsic and extrinsic
memory as embedded memory microcontrollers and external memory
microcontrollers, respectively.
Embedded memory microcontroller: A microcontroller having all the func-
tional blocks on a single chip is called an embedded microcontroller. For
example, the 8051 microcontroller contains program and data memory,
input-output ports, serial communication port, counters, timers, and inter-
rupts in one chip.
External memory microcontroller: A microcontroller not having all the func-
tional blocks available on a single chip is called an external memory
microcontroller. For example, the 8031 has no program memory on the
chip; therefore, it is an external memory microcontroller.
(iii) According to instruction set:
The instructions or commands used in microcontroller programming are
basically of two types: Reduced Instruction Set Computer (RISC) and Complex
Instruction Set Computer (CISC).
RISC: It allows each instruction to operate on any register or use any addressing
mode with simultaneous access of program and data. A RISC system
reduces the execution time by decreasing the number of clock cycles per
instruction.
CISC: It allows the programmer to use a single instruction in place of a number of simpler instructions. A CISC system reduces the execution time by decreasing the number of instructions per program.
(iv) According to memory architecture:
The process of instruction and data exchange between the memories inside
the microcontroller depends on the architecture of memory mapping. Based on
this architecture, microcontrollers are classified into two categories: Harvard
memory architecture and Von Neumann memory architecture.
Harvard architecture: Microcontrollers based on this architecture have sepa-
rate buses and different memory units for instructions and data. This allows
simultaneous fetching of data and instructions as they are stored in different
memory locations. Harvard memory architecture-based microcontrollers
like PIC-microcontrollers are faster than Von Neumann memory architec-
ture microcontrollers.
Two of the important factors that may be considered for selecting a programming
language to program a microcontroller are size and speed.
Size: The memory that the program occupies is very important as microcontrollers
have limited memory.
Speed: Program execution is desired to be fast in most applications. Programming languages that require the minimum number of machine cycles to execute are preferable for faster operation.
One of the most commonly used programming languages for programming
microcontrollers is Embedded C.
Data types represent the nature of the data to be used. In addition to the data types used in C programming, Embedded C uses a few extra data types. The data types used in Embedded C programming on a 32-bit machine are given in Table 4.1. The size and the range differ on machines with different word sizes.
Constants are values that remain fixed during the execution of a program. Variables are names assigned to memory addresses used to store data; unlike constants, variables can be changed during the execution of a program. A variable has to belong to a specific data type so that the type of data it holds is known.
4.2.2.3 Keywords
Keywords are words that convey special meanings to the language compiler. They are reserved for special purposes and are predefined and standard in nature. They are always written in lowercase.
The basic keywords specific to embedded software are sbit, bit, and sfr.
sbit: This data type is used to access a single bit of an SFR register.
bit: This data type is used to access the bit-addressable memory of RAM (20h-2Fh).
sfr: This data type is used to access an SFR register by another name. All the SFR registers must be declared with capital letters.
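For illustration, a small Keil C51-style snippet could declare these types as follows; the register address follows the standard 8051 memory map, and the names PORT1, LED, and flag are chosen here only for the example.

// Keil C51-style declarations using the sfr, sbit, and bit keywords.
sfr  PORT1 = 0x90;        // declare the SFR register P1 by another name (address 0x90)
sbit LED   = PORT1 ^ 0;   // single bit: bit 0 of that register
bit  flag;                // one bit in the bit-addressable RAM area (20h-2Fh)

void main(void)
{
    flag = 1;             // set the software flag
    LED  = flag;          // drive the port bit from the flag
    while (1);            // loop forever
}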
main ( )
{
    local variable declaration
    statements
}
function1 ( )
{
    local variable declaration
    statements
}
function2 ( )
{
    local variable declaration
    statements
}
4.2.3.1 Documentation/Comments
A global variable is a variable that can be accessed by more than one function and
can be defined anywhere in a program. These are declared outside the function.
For instance, in Keil μ-vision IDE, the global variables are declared as follows:
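A representative sketch (mirroring the global declarations used in the complete program later in this section; the names are illustrative) is:

// Global declarations placed outside any function; visible to every function in the file.
#include <reg51.h>

sbit led = P1^0;          // global: LED connected to bit 0 of port 1
unsigned int msec;        // global: delay duration shared across functions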
Local variable is a variable which is declared inside a function and can be accessed
by that function only.
The code snippet here shows the declaration of a local variable within a function
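A representative form (an illustrative sketch, following the delay routine used later in this chapter) is:

void delay(unsigned int msec)
{
    int i, j;                         /* local variables: visible only inside delay() */
    for (i = 0; i < msec; i++)        /* software delay loops */
        for (j = 0; j < 1275; j++);
}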
The main function is the function from which the execution of a program begins.
The program execution begins at the opening brace and ends at the closing brace.
(i) void main(void): The void main(void) tells that the main () will not return any
value.
For example, the code snippet for interfacing LED with 8051 in Keil μ-vision is
given below
void main (void)
{                     // opening brace
    while (1)
    {
        delay (100);
        led = 0;
        delay (100);
        led = 1;
    }
}                     // closing brace
This includes the user-defined functions that are called in the main function, e.g., the delay function is a user-defined function. The complete program, organized into its standard sections, can be expressed as:
/*..................DOCUMENTATION..................*/
/*
Project name: LED interfacing with 8051 microcontroller
Author list: XYZ
Filename: led_blink.uvproj
Functions: delay (unsigned int msec), main ()
*/
/*..................PREPROCESSOR DIRECTIVES..................*/
#include <reg51.h>
#define port1 P1              // port declaration
/*..................GLOBAL VARIABLES..................*/
sbit led = port1^0;           // global declaration
unsigned int msec;
/*..................FUNCTION DECLARATION..................*/
void delay (unsigned int msec);
/*..................MAIN FUNCTION..................*/
int main (void)
{                             // opening brace
    while (1)                 // infinite loop
    {
        delay (100);
        led = 0;              // LED off
        delay (100);
        led = 1;              // LED on
    }
}                             // closing brace
/*..................SUBPROGRAM SECTION..................*/
void delay (unsigned int msec)
{
    int i, j;                 // local variable declaration
    for (i = 0; i < msec; i++)        // software delay loops
        for (j = 0; j < 1275; j++);
}
The 8051 microcontroller, also known as the Intel MCS-51, is a single-chip microcontroller (MCU) series launched by Intel in 1981. It is an 8-bit microcontroller and is among the most popularly used microcontrollers. The 8051 architecture provides many functions through its central processing unit (CPU), random access memory (RAM), read-only memory (ROM), input/output (I/O) ports, serial port, interrupt control, and timers in one package. The AT89C51 and AT89S52 are examples of commercial 8051 microcontrollers.
4.3.1.1 AT89C51
The AT89C51 is a variant of the original Intel 8051 8-bit microcontroller from the Atmel AT89 series family. It works with the popular 8051 architecture and remains one of the most widely used microcontrollers to date. It is a popular microcontroller for beginning to learn about embedded systems. The original 8051 was developed using N-type metal oxide semiconductor (NMOS) technology, whereas the AT89C51 was developed using CMOS technology because of its low power utilization.
It is a 40-pin IC package with 4 KB of flash programmable and erasable read-only memory (PEROM). It has four ports, which together provide 32 programmable GPIO pins. It does not have an in-built ADC module and supports only USART communication. However, it can be interfaced with an external ADC IC like the ADC0804 or the ADC0808.
The device is manufactured using Atmel high-density non-volatile memory
technology and is compatible with the industry-standard MCS-51 instruction set
and pinouts. The on-chip flash allows the program memory to be reprogrammed
in-system or by a conventional non-volatile memory programmer. By combining a
versatile 8-bit CPU with flash on a monolithic chip, the Atmel AT89C51 is a
powerful microcomputer, which provides a highly flexible and cost-effective solu-
tion to many embedded control applications.
• Pin diagram (Fig. 4.3)
Programming the AT89C51
Atmel microcontrollers can be programmed with different software available in the market. Arduino and Keil uVision are the most used platforms, of which Keil uVision is the most widely used. In order to program the Atmel microcontroller, one needs an integrated development environment (IDE), where the programming takes place, and a compiler, which converts the program into an MCU-readable form called a hex file. An integrated programming environment (IPE) is used to dump the hex file into the MCU.
To dump or upload the code into the Atmel IC, one needs a programmer. The most commonly used programmer is the USBasp, which has to be purchased separately. Simulating the program in software such as Proteus before trying it on hardware also saves time.
Example Program: Interfacing LED with AT89C51 (Fig. 4.4)
The PIC microcontroller was developed in 1993 by Microchip Technology. The term PIC stands for Peripheral Interface Controller. Initially, it was developed to support programmable data processor (PDP) computers in controlling their peripheral devices and was, therefore, named a peripheral interface device. These microcontrollers are fast, and program execution is easy compared with other microcontrollers.
PIC microcontrollers are among the world's smallest microcontrollers and can be programmed to carry out a huge range of tasks. These microcontrollers are found in many electronic devices such as phones, computer control systems, alarm systems, and embedded systems. Various types of PIC microcontrollers exist; the GENIE range of programmable microcontrollers is a well-known example.
Every PIC microcontroller architecture consists of some registers and stack where
the registers function as Random Access Memory (RAM) and the stack saves the
return addresses. The main features of PIC microcontrollers are RAM, flash mem-
ory, Timers/Counters, EEPROM, I/O Ports, USART, CCP (Capture/Compare/PWM
module), SSP, Comparator, ADC (analog to digital converter), PSP (parallel slave
port), LCD, and ICSP (in circuit serial programming). The 8-bit PIC microcontroller
is classified into four types on the basis of internal architecture: base line PIC,
mid-range PIC, enhanced mid-range PIC, and PIC18.
Pin Diagram (Fig. 4.5):
void main ()
{
TRISB = 0; // PORT B as output port
PORTB = 1; // Set RB0 to high
do
{
//To turn motor clockwise
PORTB.F0 = 1;
Delay_ms (2000); //2 second delay
} while (1);
}
This example demonstrates an LED blink using an Arduino. To build this circuit,
one end of the resistor is connected to the digital pin corresponding to the
LED_BUILTIN constant. The positive leg of the LED is connected to the other
end of the resistor. The negative leg of the LED is connected to the ground.
Figure 4.7 illustrates an UNO board where D13 serves as the LED_BUILTIN value.
After building the circuit, the Arduino board should be plugged into the com-
puter. The Arduino Software (IDE) is then started, and the code below is entered.
The first step is to initialize the LED_BUILTIN pin as an output pin with the line
pinMode(LED_BUILTIN, OUTPUT);
In the main loop, you turn the LED on with the line:
digitalWrite(LED_BUILTIN, HIGH);
This supplies 5 volts to the LED and lights it up. Then you turn it off with the line:
digitalWrite(LED_BUILTIN, LOW);
That takes the LED_BUILTIN pin back to 0 volts, turning the LED off. In
between the on and off states, there should be enough time for a person to see the
change, so the delay() commands instruct the board to do nothing for 1000
milliseconds.
4.4.1 Code
void setup()
{
pinMode(LED_BUILTIN, OUTPUT); // initialize digital pin as an output
}
void loop() {
digitalWrite(LED_BUILTIN, HIGH); // turn the LED on
delay(1000); // wait for a second
digitalWrite(LED_BUILTIN, LOW); // turn the LED off
delay(1000); // wait for a second
}
Chapter 5
Spatial Descriptions: Frames
and Transformations
Fig. 5.1 (a) Cartesian coordinate system; (b) orientation of an object expressed using angles α, β,
and γ of the object with the coordinate axes; (c) polar coordinate system showing the polar
coordinates (ρ, θ, φ) of a point P
In the study of kinematics of a robotic system, one has to deal with the position and
the orientation of several bodies in space. To describe the position and orientation or
pose of a body in a coordinate system, two coordinate frames are used: fixed or
global frame, and moving or local frame.
The matrix representing the rotation of an object or moving frame with respect to a fixed frame is called the rotation matrix.
Derivation of the translation and rotational matrices is beyond the scope of this
book and readers can refer to Chapter 5 of the book entitled “Introduction to
Robotics” by Subir Kumar Saha for the purpose. For further discussions, we will
be considering the translational and the rotational matrices in their final forms.
$$T = \begin{bmatrix} P_x \\ P_y \\ P_z \end{bmatrix}$$
Rotation of an object in a coordinate frame {X, Y, Z} can happen about the X, Y, and
Z axes. The matrices Rx, Ry, and Rz representing the rotation about X, Y, and Z axes
respectively are (Fig. 5.4)
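Their standard forms (reproduced here for reference, with the rotation angles taken about the respective axes) are:

$$R_x(\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}, \quad R_y(\beta) = \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix}, \quad R_z(\gamma) = \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}$$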
$$H = \begin{bmatrix} R & T \\ O_{1\times 3} & 1 \end{bmatrix}$$

$$H = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 & P_x \\ \sin\alpha & \cos\alpha & 0 & P_y \\ 0 & 0 & 1 & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
The Denavit-Hartenberg (DH) parameters are a set of four parameters used to describe the position and orientation of a link or a joint with respect to its preceding or succeeding link or joint. To control the pose of a manipulator's end-effector for performing specific tasks, the position and orientation of the end-effector need to be expressed with respect to the manipulator's base. This can be done by describing the transformation of the coordinate frame attached to the end-effector to the base frame through the intermediate joints. The description of the transformation can be easily accomplished using the DH parameters: joint offset, joint angle, link length, and twist angle.
Figure 5.5 is drawn to define these parameters. The transformation between the
Frame i+1 attached to Link i and the Frame i attached to Link i-1 is represented using
the four DH parameters where i is the index of the link. A coordinate frame is
attached to each link.
With respect to Fig. 5.5, the DH parameters are defined as:
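Since the printed definitions are not reproduced here, the usual textbook forms are summarized as a sketch; the symbols $b_i$, $\theta_i$, $a_i$, and $\alpha_i$ are the conventional choices and may differ from the figure's labels. The joint offset $b_i$ is the distance between two successive common normals measured along the joint axis, the joint angle $\theta_i$ is the angle between them about that axis, the link length $a_i$ is the distance between the two joint axes measured along their common normal, and the twist angle $\alpha_i$ is the angle between the joint axes measured about the common normal. Composing the corresponding elementary transformations gives the familiar link transformation:

$${}^{i}_{i+1}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & b_i \\ 0 & 0 & 0 & 1 \end{bmatrix}$$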
Chapter 6
Kinematics and Dynamics
6.1 Introduction
Kinematics is the study of motion of a robotic system without considering the cause
of motion. There must be force or torque causing a linear or rotary motion. In
kinematics, one does not try to find what the amount of force or torque should
be. Kinematics deals with the motion of the system and the relative motion between
different links.
To study the motion of a robotic system, it is required to establish the orientation
or pose (i.e., the information regarding position and orientation) of the links with
respect to its previous link (or base). For this purpose, the Cartesian coordinates of
the end-effector (position and orientation of a point on the end-effector) need to be
determined. The relation of pose between the successive links needs to be deter-
mined. Based on analysis, there are two types of problems in kinematics:
• Forward or direct kinematics (forward position analysis)
• Inverse kinematics (inverse position analysis)
If the joint angles are given for a particular robotic manipulator and one needs to
find the pose of end-effector, it is called forward position analysis. If the pose of the
end-effector is given or known and one needs to determine the joint angles, it is
called inverse position analysis. A forward position analysis always has a fixed
single solution. But inverse position analysis may have more than one solution. The
pictorial representation of forward and inverse kinematics is shown in Fig. 6.1.
Two main solution techniques for the inverse kinematics problem are analytical
and numerical methods. In the first type, the joint variables are solved analytically
according to given configuration data. In the second type of solution, the joint
variables are obtained based on numerical techniques.
There are two analytical methods: the geometric and the algebraic approach. The geometric approach is applied to simple robot structures, such as a 2-DoF planar manipulator or a manipulator with few DoF and parallel joint axes. For manipulators with more links, or with arms extending into three dimensions, the geometry becomes more tedious; in such cases, the algebraic approach is more beneficial for the inverse kinematics solution.
In the forward position analysis, the joint angles (joint displacement in case of
prismatic joints) are available. Based on these values, one needs to find the position
and orientation of the end-effector.
First, let us discuss about robot mechanisms that work within a plane, i.e., planar
kinematics. Let us consider the three DoF planar robot arm shown in Fig. 6.2. The
arm consists of one fixed link and three movable links that move within the plane.
All the links are connected by revolute joints whose joint axes are all perpendicular
to the plane of the links. There is no closed-loop kinematic chain and, therefore, it is
a serial link mechanism.
One can relate the end-effecter coordinates to the joint angles and link lengths as:
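Explicitly, a sketch of these relations for the three-link planar arm of Fig. 6.2 (assuming link lengths $a_1$, $a_2$, $a_3$ and joint angles $\theta_1$, $\theta_2$, $\theta_3$; the symbols are assumed here since the printed equations are not reproduced) is:

$$x_e = a_1\cos\theta_1 + a_2\cos(\theta_1+\theta_2) + a_3\cos(\theta_1+\theta_2+\theta_3) \qquad (6.1)$$

$$y_e = a_1\sin\theta_1 + a_2\sin(\theta_1+\theta_2) + a_3\sin(\theta_1+\theta_2+\theta_3) \qquad (6.2)$$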
The orientation of the end-effecter can also be described as the angle made by the
center-line of the end-effector with the positive X-coordinate axis. This end-effector
orientation Qe is related to the joint angles as:
$$Q_e = \theta_1 + \theta_2 + \theta_3 \qquad (6.3)$$
The above three equations describe the position and orientation of the robot
end-effector as viewed from the fixed coordinate system in relation to the joint
angles. In general, a set of algebraic equations relating the position and orientation of
a robot end-effector or any significant part of the robot to the joint angles is called
kinematic equations or, more specifically, forward kinematic equations in robotics.
Planar kinematics is mathematically much more tractable compared to three-
dimensional kinematics. For three-dimensional forward position analysis, one can
follow the four standard steps given below.
Step 1: Attach the coordinate frames in each of the links.
Step 2: Define the Denavit Hartenberg (DH) parameter for every link.
Step 3: Write the homogeneous transformation matrix (HTM) of each frame with
respect to the previous frame.
Step 4: The resultant HTM of end-effector with respect to the base is determined by
post-multiplication of the previous individual HTMs.
To understand the above steps, one can consider the following serial manipulator
in Fig. 6.3 and perform the forward position analysis.
Attach coordinate frames to each of the n + 1 links of the robot, with frame 1 attached to the fixed link and frame n + 1 attached to the end-effector. Define the DH parameters for every link, i.e., link #1, link #2, link #3, ..., link #n. Write the HTMs T1, T2, T3, ..., Tn, where Ti, for i = 1, 2, ..., n, represents the transformation of body i with respect to its previous link i - 1, i.e., of frame i + 1 with respect to frame i.
Now, obtain each individual HTM from the four elementary transformations corresponding to the DH parameters. The HTM of the end-effector (frame n + 1) with respect to the base frame is then obtained as:
$$T = T_1\, T_2 \cdots T_n \qquad (6.4)$$
Equation (6.4) is called the closure equation of the robot. Orientation Q of the
end-effector with respect to the fixed frame can be expressed as:
$$Q = Q_1\, Q_2 \cdots Q_n \qquad (6.5)$$
$$T_i = \begin{bmatrix} c_i & -s_i & 0 & a_i c_i \\ s_i & c_i & 0 & a_i s_i \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where Ti represents the HTM for link i, with ci = cosθi, si = sinθi, and ai = the link length, for i = 1, 2.
The solution of the forward position analysis is:

$$T = T_1\, T_2$$
$$T = \begin{bmatrix} c_1 & -s_1 & 0 & a_1 c_1 \\ s_1 & c_1 & 0 & a_1 s_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_2 & -s_2 & 0 & a_2 c_2 \\ s_2 & c_2 & 0 & a_2 s_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} c_{12} & -s_{12} & 0 & a_1 c_1 + a_2 c_{12} \\ s_{12} & c_{12} & 0 & a_1 s_1 + a_2 s_{12} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where c12 = cosθ12, s12 = sinθ12, and θ12 = θ1 + θ2. The position of the end-effector is therefore

$$P_x = a_1 c_1 + a_2 c_{12} \qquad (6.6)$$

$$P_y = a_1 s_1 + a_2 s_{12} \qquad (6.7)$$
By knowing the values of link lengths (a1 and a2) and joint angles (θ1 and θ2), one
can determine the configuration of end-effector of the two-link planar arm.
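As a small numerical companion, Eqs. (6.6) and (6.7) can be evaluated directly; the following plain-C sketch uses a function name and radian convention chosen here for illustration, not taken from the text.

#include <math.h>
#include <stdio.h>

/* Forward kinematics of a two-link planar arm, Eqs. (6.6)-(6.7):
   Px = a1*cos(t1) + a2*cos(t1 + t2), Py = a1*sin(t1) + a2*sin(t1 + t2).
   Angles are in radians; a1, a2 are the link lengths. */
static void forward_kinematics(double a1, double a2, double t1, double t2,
                               double *px, double *py)
{
    *px = a1 * cos(t1) + a2 * cos(t1 + t2);
    *py = a1 * sin(t1) + a2 * sin(t1 + t2);
}

int main(void)
{
    double px, py;
    const double deg = 3.141592653589793 / 180.0;   /* degrees to radians */

    /* Example: unit link lengths, theta1 = 30 deg, theta2 = 45 deg */
    forward_kinematics(1.0, 1.0, 30.0 * deg, 45.0 * deg, &px, &py);
    printf("Px = %f, Py = %f\n", px, py);
    return 0;
}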
The problem of finding the set of joint angles for a given end-effector position and
orientation is referred to as inverse kinematics or inverse position analysis. In this
section, the problem of moving the end-effector of a manipulator arm to a specified
position and orientation is discussed. For this, one needs to find the joint angles that
lead the end-effector to the specified position and orientation. This is the inverse of
the previous problem and is thus referred to as inverse kinematics. The kinematic
equation must be solved for joint angles, given the end-effecter position and
orientation. Once the kinematic equation is solved, the desired end-effector motion
can be achieved by moving each joint to the determined pose.
In forward kinematics problem, the end-effecter location is determined uniquely
for any given set of joint angles. On the other hand, inverse kinematics is more
complex in the sense that multiple solutions may exist for the same end-effecter
pose. Also, solutions may not always exist for a particular range of end-effector
poses and joint angles. Furthermore, since the kinematic equation is comprised of
nonlinear equations with trigonometric functions, it is not always possible to derive a
closed-form solution, which is an explicit inverse function of the kinematic equation.
There are two solution approaches: geometric and algebraic used for deriving the
analytical inverse kinematics solutions. When the kinematic equation cannot be
solved analytically, numerical methods are used in order to derive the desired joint
angles.
$$\begin{aligned} p_x^2 + p_y^2 &= l_1^2\,(c^2\theta_1 + s^2\theta_1) + l_2^2\,(c^2\theta_{12} + s^2\theta_{12}) + 2 l_1 l_2\,(c\theta_1\, c\theta_{12} + s\theta_1\, s\theta_{12}) \\ &= l_1^2 + l_2^2 + 2 l_1 l_2\,\{ c\theta_1 (c\theta_1 c\theta_2 - s\theta_1 s\theta_2) + s\theta_1 (s\theta_1 c\theta_2 + c\theta_1 s\theta_2) \} \\ &= l_1^2 + l_2^2 + 2 l_1 l_2\,( c^2\theta_1 c\theta_2 - c\theta_1 s\theta_1 s\theta_2 + s^2\theta_1 c\theta_2 + s\theta_1 c\theta_1 s\theta_2 ) \\ &= l_1^2 + l_2^2 + 2 l_1 l_2\, c\theta_2\,(c^2\theta_1 + s^2\theta_1) \\ &= l_1^2 + l_2^2 + 2 l_1 l_2\, c\theta_2 \end{aligned} \qquad (6.12)$$
Rearranging Eq. (6.12),

$$c\theta_2 = \frac{p_x^2 + p_y^2 - l_1^2 - l_2^2}{2 l_1 l_2} \qquad (6.13)$$

Similarly,

$$s\theta_2 = \pm\sqrt{1 - \left( \frac{p_x^2 + p_y^2 - l_1^2 - l_2^2}{2 l_1 l_2} \right)^{2} } \qquad (6.14)$$

$$\theta_2 = \operatorname{Atan2}\!\left( \pm\sqrt{1 - \left( \frac{p_x^2 + p_y^2 - l_1^2 - l_2^2}{2 l_1 l_2} \right)^{2} },\; \frac{p_x^2 + p_y^2 - l_1^2 - l_2^2}{2 l_1 l_2} \right) \qquad (6.15)$$
Multiplying each side of Eq. (6.8) by cθ1 and Eq. (6.9) by sθ1
On simplification,
Multiply each side of Eq. (6.8) by -sθ1 and Eq. (6.9) by cθ1 and add the resulting
equations:
Now, multiply each side of Eq. (6.19) by px and Eq. (6.20) by py and add the
resulting equations:
and

$$s\theta_1 = \pm\sqrt{1 - \left( \frac{p_x (l_1 + l_2 c\theta_2) + p_y\, l_2 s\theta_2}{p_x^2 + p_y^2} \right)^{2} } \qquad (6.23)$$

$$\theta_1 = \operatorname{Atan2}\!\left( \pm\sqrt{1 - \left( \frac{p_x (l_1 + l_2 c\theta_2) + p_y\, l_2 s\theta_2}{p_x^2 + p_y^2} \right)^{2} },\; \frac{p_x (l_1 + l_2 c\theta_2) + p_y\, l_2 s\theta_2}{p_x^2 + p_y^2} \right) \qquad (6.24)$$
This is how one can perform the inverse kinematics of a two-link planar arm using the geometric solution approach. However, the difficulty level increases with the number of links. Therefore, the algebraic approach is preferred for performing the inverse kinematics of manipulators with more than three links.
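For completeness, a plain-C sketch of Eqs. (6.13)-(6.15) and the corresponding θ1 solution is given below; the elbow-up/elbow-down sign choice and the use of an equivalent atan2 form for θ1 are implementation choices made here, not prescribed by the text.

#include <math.h>
#include <stdio.h>

/* Inverse kinematics of a two-link planar arm with link lengths l1, l2.
   Returns 0 on success, -1 if (px, py) is outside the reachable workspace.
   elbow = +1 or -1 selects between the two possible solutions. */
static int inverse_kinematics(double l1, double l2, double px, double py,
                              int elbow, double *theta1, double *theta2)
{
    double c2 = (px * px + py * py - l1 * l1 - l2 * l2) / (2.0 * l1 * l2); /* Eq. (6.13) */
    if (c2 < -1.0 || c2 > 1.0)
        return -1;                              /* target not reachable */

    double s2 = elbow * sqrt(1.0 - c2 * c2);    /* Eq. (6.14), sign selects the branch */
    *theta2 = atan2(s2, c2);                    /* Eq. (6.15) */

    /* theta1 from the equivalent atan2 form:
       theta1 = atan2(py, px) - atan2(l2*s2, l1 + l2*c2) */
    *theta1 = atan2(py, px) - atan2(l2 * s2, l1 + l2 * c2);
    return 0;
}

int main(void)
{
    double t1, t2;
    if (inverse_kinematics(1.0, 1.0, 1.2, 0.8, +1, &t1, &t2) == 0)
        printf("theta1 = %f rad, theta2 = %f rad\n", t1, t2);
    return 0;
}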
Application of this approach to a 6-DoF manipulator with revolute joints qi and link
lengths li (i = 1, 2, 3, 4, 5, 6) is considered for discussion. The homogeneous
transformation matrix (HTM) for the manipulator can be written as:
$${}^0_6T = {}^0_1T(q_1)\; {}^1_2T(q_2)\; {}^2_3T(q_3)\; {}^3_4T(q_4)\; {}^4_5T(q_5)\; {}^5_6T(q_6) \qquad (6.25)$$
To find the inverse kinematics solution for the first joint (q1) as a function of the known elements of $^{\mathrm{base}}_{\mathrm{end\text{-}effector}}T$, the inverse of the first link transformation is premultiplied on both sides as follows:
$$ \left[{}^0_1T(q_1)\right]^{-1}\,{}^0_6T = \left[{}^0_1T(q_1)\right]^{-1}\,{}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) $$
With $\left[{}^0_1T(q_1)\right]^{-1}\,{}^0_1T(q_1) = I$, $I$ being the identity matrix, the above equation becomes:
$$ \left[{}^0_1T(q_1)\right]^{-1}\,{}^0_6T = {}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) \quad (6.26) $$
$$ \left[{}^0_1T(q_1)\,{}^1_2T(q_2)\right]^{-1}\,{}^0_6T = {}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) \quad (6.27) $$
$$ \left[{}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\right]^{-1}\,{}^0_6T = {}^3_4T(q_4)\,{}^4_5T(q_5)\,{}^5_6T(q_6) \quad (6.28) $$
$$ \left[{}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\right]^{-1}\,{}^0_6T = {}^4_5T(q_5)\,{}^5_6T(q_6) \quad (6.29) $$
$$ \left[{}^0_1T(q_1)\,{}^1_2T(q_2)\,{}^2_3T(q_3)\,{}^3_4T(q_4)\,{}^4_5T(q_5)\right]^{-1}\,{}^0_6T = {}^5_6T(q_6) \quad (6.30) $$
The Jacobian matrix, used in the kinematic and dynamic analysis of a robot, relates the joint velocities to the linear and angular velocities of the end-effector. For the serial manipulator in Fig. 6.3, the velocity of the end-effector can be expressed using the Jacobian as:
$$ \text{Velocity, } V = \begin{bmatrix} v \\ \omega \end{bmatrix} = J\begin{bmatrix} \dot\theta_1 \\ \dot\theta_2 \\ \vdots \end{bmatrix}, \qquad v = \begin{bmatrix} \dot p_x \\ \dot p_y \\ \dot p_z \end{bmatrix} $$
where
$p_x$, $p_y$, $p_z$ are the components of the end-effector position vector,
$v$ is the linear velocity of the end-effector,
$\omega$ is the angular velocity of the end-effector, and
$\theta_1, \theta_2, \ldots$ are the joint variables (joint angle for revolute joints, displacement for prismatic joints).
$$ y_1 = f_1(x_1, x_2, x_3, \ldots) \quad (6.31a) $$
$$ y_2 = f_2(x_1, x_2, x_3, \ldots) \quad (6.31b) $$
$$ y_n = f_n(x_1, x_2, x_3, \ldots) \quad (6.31c) $$
$$ \delta y_1 = \frac{\partial f_1}{\partial x_1}\delta x_1 + \frac{\partial f_1}{\partial x_2}\delta x_2 + \cdots \quad (6.32a) $$
$$ \delta y_2 = \frac{\partial f_2}{\partial x_1}\delta x_1 + \frac{\partial f_2}{\partial x_2}\delta x_2 + \cdots \quad (6.32b) $$
$$ \delta y_n = \frac{\partial f_n}{\partial x_1}\delta x_1 + \frac{\partial f_n}{\partial x_2}\delta x_2 + \cdots \quad (6.32c) $$
$$ \begin{bmatrix} \dot y_1 \\ \dot y_2 \\ \vdots \end{bmatrix} = J\begin{bmatrix} \dot x_1 \\ \dot x_2 \\ \vdots \end{bmatrix} \quad (6.33) $$
$$ \dot Y = J\,\dot X \quad (6.34) $$
The matrix J is called the Jacobian of the system. The following example is considered for understanding:
Let the position vector be
$$ P = \begin{bmatrix} r_1\cos\theta \\ r_2\sin\theta \\ 0 \end{bmatrix} $$
Then
$$ \dot p_x = -r_1\sin\theta\,\dot\theta, \qquad \dot p_y = r_2\cos\theta\,\dot\theta, \qquad \dot p_z = 0 $$
$$ \begin{bmatrix} \dot p_x \\ \dot p_y \\ \dot p_z \end{bmatrix} = \begin{bmatrix} -r_1\sin\theta \\ r_2\cos\theta \\ 0 \end{bmatrix}\dot\theta, \qquad J = \begin{bmatrix} -r_1\sin\theta \\ r_2\cos\theta \\ 0 \end{bmatrix} $$
$$ {}^0_1T = \begin{bmatrix} c_1 & -s_1 & 0 & l_1 \\ s_1 & c_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad {}^1_2T = \begin{bmatrix} c_2 & -s_2 & 0 & l_2 \\ s_2 & c_2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$
$$ {}^0_2T = {}^0_1T\,{}^1_2T = \begin{bmatrix} c_{12} & -s_{12} & 0 & l_1 + l_2c_1 \\ s_{12} & c_{12} & 0 & l_2s_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6.35) $$
$$ P = \begin{bmatrix} l_1 + l_2c_1 \\ l_2s_1 \\ 0 \end{bmatrix} $$
And,
$$ \dot p_x = -l_2s_1\dot\theta_1, \qquad \dot p_y = l_2c_1\dot\theta_1, \qquad \dot p_z = 0 $$
$$ \begin{bmatrix} \dot p_x \\ \dot p_y \\ \dot p_z \end{bmatrix} = \begin{bmatrix} -l_2s_1 \\ l_2c_1 \\ 0 \end{bmatrix}\dot\theta_1 \quad (6.36) $$
Velocity vector,
$$ V = \begin{bmatrix} -l_2s_1 \\ l_2c_1 \\ 0 \end{bmatrix}\dot\theta_1 $$
Jacobian,
$$ J = \begin{bmatrix} -l_2s_1 \\ l_2c_1 \\ 0 \end{bmatrix} \quad (6.37) $$
$$ {}^2J = {}^2_0R\,{}^0J $$
where ${}^2_0R$ is the rotation matrix from frame 0 to frame 2 and ${}^0J$ is the Jacobian with respect to frame 0.
Now,
$$ {}^2_0R = \left[{}^0_2R\right]^{-1} = \begin{bmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (6.38) $$
Therefore,
$$ {}^2V = {}^2J\,\dot\theta = \begin{bmatrix} -l_2c_{12}s_1 + l_2s_{12}c_1 \\ l_2s_{12}s_1 + l_2c_{12}c_1 \\ 0 \end{bmatrix}\dot\theta \quad (6.40) $$
In this method, the velocity of each link is calculated from the velocity of the previous link, and the Jacobian can thereby be obtained. For example, let us consider the three-link manipulator shown in Fig. 6.8.
To obtain the velocity of the end-effector, one needs to find the velocity of
(i) the base frame (0th frame)
(ii) the 1st frame
where
$\omega$ = angular velocity,
$V$ = linear velocity, and
${}^iP_{i+1}$ = position vector.
$$ \dot\theta_{i+1}\,{}^{i+1}\hat Z_{i+1} = \begin{bmatrix} 0 \\ 0 \\ \dot\theta_{i+1} \end{bmatrix} \ \text{(for a revolute joint)}, \qquad \dot d_{i+1}\,{}^{i+1}\hat Z_{i+1} = \begin{bmatrix} 0 \\ 0 \\ \dot d_{i+1} \end{bmatrix} \ \text{(for a prismatic joint)} $$
Example 6.3 Consider the manipulator in Fig. 6.9 and determine the link velocities and the Jacobian.
$$ {}^0_1T = \begin{bmatrix} c_1 & -s_1 & 0 & 0 \\ s_1 & c_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad {}^1_2T = \begin{bmatrix} -s_2 & -c_2 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ c_2 & -s_2 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad {}^2_3T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & d_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$
Now,
$$ {}^0_3T = {}^0_1T\,{}^1_2T\,{}^2_3T = \begin{bmatrix} -c_1s_2 & s_1 & c_1c_2 & c_1c_2d_3 \\ -s_1s_2 & -c_1 & s_1c_2 & s_1c_2d_3 \\ c_2 & 0 & s_2 & s_2d_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6.43) $$
$$ {}^0V_0 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad {}^0\omega_0 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \quad (6.44) $$
For i = 0,
$$ {}^1\omega_1 = {}^1_0R\,{}^0\omega_0 + \dot\theta_1\hat Z = 0 + \dot\theta_1\hat Z = \begin{bmatrix} 0 \\ 0 \\ \dot\theta_1 \end{bmatrix} \quad (6.45) $$
$$ {}^1V_1 = {}^1_0R\left({}^0V_0 + {}^0\omega_0 \times {}^0P_1\right) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \quad (6.46) $$
Now for i = 1
$$ {}^2\omega_2 = {}^2_1R\,{}^1\omega_1 + \dot\theta_2\hat Z = \begin{bmatrix} -S_2 & 0 & C_2 \\ -C_2 & 0 & -S_2 \\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ \dot\theta_1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \dot\theta_2 \end{bmatrix} = \begin{bmatrix} C_2\dot\theta_1 \\ -S_2\dot\theta_1 \\ \dot\theta_2 \end{bmatrix} \quad (6.47) $$
$$ {}^2V_2 = {}^2_1R\left({}^1V_1 + {}^1\omega_1 \times {}^1P_2\right) = \begin{bmatrix} -S_2 & 0 & C_2 \\ -C_2 & 0 & -S_2 \\ 0 & -1 & 0 \end{bmatrix}\left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ \dot\theta_1 \end{bmatrix} \times \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\right) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \quad (6.48) $$
Now i = 2,
$$ {}^3\omega_3 = {}^3_2R\,{}^2\omega_2 + 0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} C_2\dot\theta_1 \\ -S_2\dot\theta_1 \\ \dot\theta_2 \end{bmatrix} = \begin{bmatrix} C_2\dot\theta_1 \\ \dot\theta_2 \\ -S_2\dot\theta_1 \end{bmatrix} \quad (6.49) $$
$$ {}^3V_3 = {}^3_2R\left({}^2V_2 + {}^2\omega_2 \times {}^2P_3\right) + \dot d_3\hat Z_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}\left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} C_2\dot\theta_1 \\ -S_2\dot\theta_1 \\ \dot\theta_2 \end{bmatrix} \times \begin{bmatrix} 0 \\ -d_3 \\ 0 \end{bmatrix}\right) + \begin{bmatrix} 0 \\ 0 \\ \dot d_3 \end{bmatrix} = \begin{bmatrix} d_3\dot\theta_2 \\ -d_3C_2\dot\theta_1 \\ \dot d_3 \end{bmatrix} \quad (6.50) $$
The linear velocity at the end-effector frame is ${}^3V_3 = \begin{bmatrix} d_3\dot\theta_2 & -d_3C_2\dot\theta_1 & \dot d_3 \end{bmatrix}^T$ and the angular velocity at the end-effector frame is ${}^3\omega_3 = \begin{bmatrix} C_2\dot\theta_1 & \dot\theta_2 & -S_2\dot\theta_1 \end{bmatrix}^T$.
$$ {}^3V = \begin{bmatrix} d_3\dot\theta_2 \\ -d_3C_2\dot\theta_1 \\ \dot d_3 \end{bmatrix} = \begin{bmatrix} 0 & d_3 & 0 \\ -d_3C_2 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} \dot\theta_1 \\ \dot\theta_2 \\ \dot d_3 \end{bmatrix} \quad (6.51) $$
$$ {}^3J = \begin{bmatrix} 0 & d_3 & 0 \\ -d_3C_2 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
$$ {}^0J = {}^0_3R\,{}^3J = \begin{bmatrix} -C_1S_2 & S_1 & C_1C_2 \\ -S_1S_2 & -C_1 & S_1C_2 \\ C_2 & 0 & S_2 \end{bmatrix}\begin{bmatrix} 0 & d_3 & 0 \\ -d_3C_2 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -S_1d_3C_2 & -C_1S_2d_3 & C_1C_2 \\ C_1d_3C_2 & -S_1S_2d_3 & S_1C_2 \\ 0 & C_2d_3 & S_2 \end{bmatrix} \quad (6.52) $$
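The frame transformation of Eq. (6.52) amounts to a 3 × 3 matrix product. The sketch below is illustrative, not from the text: the joint values are assumed, and the matrices follow the reconstruction above. It shows how a Jacobian expressed in the end-effector frame is rotated into the base frame.

```c
/* Illustrative sketch: 0J = 0_3R * 3J as in Eq. (6.52). Joint values assumed. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double t1 = 0.3, t2 = 0.6, d3 = 0.4;      /* assumed joint values           */
    double c1 = cos(t1), s1 = sin(t1), c2 = cos(t2), s2 = sin(t2);

    double R[3][3] = {                         /* rotation 0_3R                  */
        { -c1 * s2,  s1,  c1 * c2 },
        { -s1 * s2, -c1,  s1 * c2 },
        {  c2,      0.0,  s2      }
    };
    double J3[3][3] = {                        /* Jacobian in frame 3            */
        {  0.0,       d3,  0.0 },
        { -d3 * c2,  0.0,  0.0 },
        {  0.0,      0.0,  1.0 }
    };
    double J0[3][3] = { { 0 } };

    for (int i = 0; i < 3; i++)                /* J0 = R * J3                    */
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                J0[i][j] += R[i][k] * J3[k][j];

    for (int i = 0; i < 3; i++)
        printf("%8.4f %8.4f %8.4f\n", J0[i][0], J0[i][1], J0[i][2]);
    return 0;
}
```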
The robot dynamics can be written in the standard form $M(q)\ddot q + C(q,\dot q) + G(q) = u$, where:
$M(q) \in \mathbb{R}^{n\times n}$ is the positive definite inertia matrix (it can be inverted for forward simulation of the dynamics). The inertia term depends on the mass distribution of the robotic links and is expressed in terms of the moment of inertia of each link.
$C(q,\dot q) \in \mathbb{R}^{n}$ contains the centripetal and Coriolis forces. The Coriolis components appear whenever there is a sliding joint on a rotating link.
$G(q) \in \mathbb{R}^{n}$ contains the gravitational forces.
$u$ is the vector of joint torques.
$$ M(q)\ddot q + F(q, \dot q) = u \quad (6.54) $$
In case of a robot, knowing its physical parameters, one normally wishes to solve
two problems related to its dynamics. They are forward and inverse dynamics. The
schematic of forward and inverse dynamics is presented in Fig. 6.10.
Forward dynamics computes joint motions of a robot for a given set of joint
torques or forces as a function of time. Forward dynamics is required to find the
response of the robot arm corresponding to the applied torques or forces at the joints.
It is used primarily for computer simulation of a robot, showing how the robot will perform even before it is built.
In the case of forward dynamics, if the applied torques are known, Eq. (6.54) can be rearranged as $\ddot q = M(q)^{-1}\left(u - F(q,\dot q)\right)$ and integrated to simulate the dynamics of the system.
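A minimal forward-dynamics sketch is given below (illustrative, not from the text): a single revolute link, treated as a point mass at distance l from the joint, is simulated by rearranging Eq. (6.54) and integrating with the Euler method. The mass, length, friction, torque, and step size are all assumed values.

```c
/* Illustrative sketch: forward dynamics of one revolute link,
 * q_ddot = (u - F(q, q_dot)) / M, integrated with the Euler method. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double m = 1.0, l = 0.5, g = 9.81;   /* assumed mass, length, gravity        */
    const double b = 0.05;                     /* assumed viscous friction coefficient */
    const double M = m * l * l;                /* inertia of a point mass at distance l */
    const double dt = 0.001;                   /* integration step (s)                 */
    double q = 0.0, q_dot = 0.0;               /* joint angle (rad) and rate (rad/s)   */
    double u = 2.0;                            /* assumed constant joint torque (N m)  */

    for (int k = 0; k < 2000; k++) {           /* simulate 2 seconds                   */
        double F = m * g * l * sin(q) + b * q_dot;  /* gravity + friction term F(q, q_dot) */
        double q_ddot = (u - F) / M;           /* forward dynamics                     */
        q_dot += q_ddot * dt;                  /* Euler integration                    */
        q     += q_dot * dt;
    }
    printf("Joint angle after 2 s: %.3f rad\n", q);
    return 0;
}
```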
Inverse dynamics deals with the evaluation of joint torques and forces required
for a set of joint motions for a particular force at the end-effector. The inverse
dynamics problem is used to find the actuator torques or forces required to generate a
desired trajectory of the robot’s end-effector. An efficient inverse dynamics model
becomes extremely important for real-time control of robots.
Two basic approaches used for dynamic analysis of a robot are considered for
discussion in this section: Euler-Lagrange and Newton-Euler equations of motion.
Euler-Lagrange Formulation
$$ L = T - U \quad (6.56) $$
where L is Lagrangian, T is the total kinetic energy, and U is total potential energy of
the system.
The kinetic energy depends on both (a) the configuration, that is, the position and orientation, and (b) the velocity of the links of a robotic system, whereas the potential energy depends only on the configuration of the links. In such a case, the Euler-Lagrange equations of motion are given by
$$ \frac{d}{dt}\frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = \phi_i, \qquad \text{for } i = 1, \ldots, n \quad (6.57) $$
where $\theta_{12} = \theta_1 + \theta_2$. Therefore, the system has $6 - 4 = 2$ DoF, and the independent set of generalized coordinates is $\{\theta_1, \theta_2\}$.
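As a simple worked illustration of Eq. (6.57) (not taken from the text), consider a single link of mass m and length l, modeled as a point mass at its tip, rotating about a fixed joint under gravity, with θ measured from the downward vertical, the joint taken as the zero of potential energy, and τ the joint torque:
$$ T = \frac{1}{2}ml^2\dot\theta^2, \qquad U = -mgl\cos\theta, \qquad L = T - U = \frac{1}{2}ml^2\dot\theta^2 + mgl\cos\theta $$
$$ \frac{d}{dt}\frac{\partial L}{\partial\dot\theta} - \frac{\partial L}{\partial\theta} = ml^2\ddot\theta + mgl\sin\theta = \tau $$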
Kinetic Energy
Consider a robot consisting of n rigid links as shown in Fig. 6.12a.
The kinetic energy of a typical link i shown in Fig. 6.12b denoted by Ti is given
by
Fig. 6.12 A serial chain robot. (a) A serial chain manipulator, (b) The ith body (or link)
$$ T_i = \frac{1}{2}m_i\dot c_i^T\dot c_i + \frac{1}{2}\omega_i^T I_i\,\omega_i \quad (6.60) $$
where
$\dot c_i = J_{c,i}\,\dot\theta$ is the three-dimensional velocity vector of the mass center $C_i$ of the ith link,
$\omega_i = J_{\omega,i}\,\dot\theta$ is the three-dimensional angular velocity vector of the ith link,
$m_i$ is the mass of the ith link (a scalar quantity), and
$I_i$ is the 3 × 3 inertia tensor or matrix of the ith link about $C_i$.
The total kinetic energy of the robot is the sum of the contributions of each rigid
link due to the relative motion and is given by
$$ T = \sum_{i=1}^{n}\left(\frac{1}{2}m_i\dot c_i^T\dot c_i + \frac{1}{2}\omega_i^T I_i\,\omega_i\right) = \sum_{i=1}^{n}\frac{1}{2}\dot\theta^T\bar I_i\,\dot\theta \quad (6.61) $$
where the n × n matrix $\bar I_i = m_iJ_{c,i}^TJ_{c,i} + J_{\omega,i}^TI_iJ_{\omega,i}$, and $m_iJ_{c,i}^TJ_{c,i}$ and $J_{\omega,i}^TI_iJ_{\omega,i}$ are n × n matrices.
Moreover, if the n × n matrix is defined by
$$ I = \sum_{i=1}^{n}\bar I_i \quad (6.62) $$
then
$$ T = \frac{1}{2}\dot\theta^T I\,\dot\theta \quad (6.63) $$
The matrix I is called the Generalized Inertia Matrix (GIM) of the robot.
Potential Energy
The potential energy stored in link i is defined as the amount of work required to
raise the center of mass of link i from a horizontal reference plane to its present
position under the influence of gravity. Similar to kinetic energy, the total potential
energy stored in a robot is given by the sum of the contributions of each link and is
given by
$$ U = -\sum_{i=1}^{n} m_i c_i^T g \quad (6.64) $$
where $g$ is the gravitational acceleration vector and the vector $c_i$, the position of the mass center of link i, is a function of the joint variables $\theta_i$ of the robot.
Equation of Motion
Using the values of potential and kinetic energy of the robot in Eq. (6.56), the
Lagrangian is obtained as:
$$ L = T - U = \sum_{i=1}^{n}\left(\frac{1}{2}\dot\theta^T\bar I_i\,\dot\theta + m_i c_i^T g\right) \quad (6.65) $$
Let iij be the (i, j) element of the robot’s GIM I, then Eq. (6.65) can be written as
$$ L = \sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{2}\,i_{ij}\,\dot\theta_i\dot\theta_j + \sum_{i=1}^{n} m_i c_i^T g \quad (6.66) $$
Next, the Lagrangian function is differentiated with respect to θi, θ_i , and t to
obtain the dynamic equations of motion. After differentiating L with respect to θi, θ_i ,
and t, and then combining, the dynamic equation of motion is derived as:
$$ \sum_{j=1}^{n} i_{ij}\ddot\theta_j + h_i + \gamma_i = \tau_i \quad (6.67) $$
for i = 1, ..., n, where
$$ h_i = \sum_{j=1}^{n}\sum_{k=1}^{n}\left(\frac{\partial i_{ij}}{\partial\theta_k} - \frac{1}{2}\frac{\partial i_{jk}}{\partial\theta_i}\right)\dot\theta_j\dot\theta_k \quad (6.68) $$
$$ \gamma_i = -\sum_{j=1}^{n}\left[j_{c,j}^{(i)}\right]^T m_j\,g \quad (6.69) $$
Writing Eq. (6.67) for all the n generalized coordinates, the equation of motion
can be written in a compact form as:
$$ I\ddot\theta + h + \gamma = \tau \quad (6.70) $$
Newton-Euler Formulation
The motion of a rigid body can be decomposed into translational motion with respect
to an arbitrary point fixed to the rigid body, and the rotational motion of the rigid
body about that point. The dynamic equations of a rigid body can also be represented
by two equations: one describes the translational motion of the centroid (or center of
mass), while the other describes the rotational motion about the centroid. The former
is Newton’s equation of motion for a mass particle, and the latter is called Euler’s
equation of motion.
As shown in Fig. 6.13, let F be the fixed frame. The vector m is the linear momentum of the rigid link or body B expressed in the frame F, and the corresponding angular momentum is represented by the vector $\tilde m$. Also, let the vectors $f_o$ and $n_o$ be the resultant force and moment exerted on B at and about the origin O, respectively. Then Newton's equation of motion states that the time-
Fig. 6.13 Resulting force and moment acting on a rigid body (link)
derivative of the linear momentum m equals the external force acting on the body, i.e.,
$$ f_o = \frac{dm}{dt} \quad (6.71) $$
On the other hand, Euler's equation of rotational motion states that the time rate of change of the angular momentum $\tilde m_o$ equals the external moment acting on the body (the angular momentum and the external moment must be taken about the same point), i.e.,
$$ n_o = \frac{d\tilde m_o}{dt} \quad (6.72) $$
$$ m = \frac{d}{dt}(mc) = m\dot c \quad (6.73) $$
For a body of constant mass, substituting Eq. (6.73) into Eq. (6.71) gives
$$ f_o = m\frac{d\dot c}{dt} = m\ddot c \quad (6.74) $$
Equation (6.74) is called Newton’s equation of motion for the center of mass.
Again, the angular momentum of a rigid body about its center of mass C is given by
$$ \tilde m = I\omega \quad (6.75) $$
where I is the inertia tensor of the body B about its center of mass C.
Substituting Eq. (6.75) into Eq. (6.72), we get
$$ [n_c] = [I_c][\dot\omega_c] + [\omega_c] \times [I_c][\omega_c] \quad (6.76) $$
Equation (6.76) is called Euler's equation of rotational motion for the center of mass coordinate frame.
For a robot to accomplish any task, the pose, i.e., position and orientation of the
links, joints, and end–effector, is to be known. Kinematic analysis for determining
the pose using the forward and inverse kinematic techniques presented in this
chapter can be simulated using the software tools mentioned in Sect. 8.6.3 in
Chap. 8. Further, these tools will also enable the reader to perform the dynamic
analysis of a robot manipulator, which is required for their motion control.
Chapter 7
Control Systems in Robotics
Natural and man-made control systems: The naturally present biological systems in
a living being and the environment are natural control systems. The control
systems developed by man are man-made control systems; e.g., the human
digestive system is a natural control system, whereas an automobile engine is a
man-made control system.
Combinational control systems: The fusion of natural control systems and
man-made control systems forms combinational systems, e.g., a prosthetic hand
control using body signals and electronic systems.
Some basic concepts for understanding a control system are state, estimate, refer-
ence, error, and dynamics.
7.3.1 State
State in a robotic control system refers to the output of the system. It depends on its
previous states, the stimulus or input applied to the actuators of the robotic system,
and the physics of the robot’s environment. Pose, speed, velocity, angular velocity,
and force are some of the states in a robotic control system.
7.3.2 Estimate
The exact state of a robotic system cannot be determined by the robot, but it can be estimated using the sensors with which the robot is equipped. Sensors with good accuracy, sensitivity, and precision are essential to produce a good estimate of the state.
7.3.3 Reference
Reference is the desired or target value of the state that the robotic control system is commanded to achieve.
7.3.4 Error
Error is the difference between the reference and the estimated state of a robotic system.
7.3.5 Dynamics
Dynamics in a robotic system describes the behavior of the system under non-static
conditions with reference to time. Figure 7.2 shows a schematic of a control system.
The state of the system is represented by x, the estimated state is represented by y,
the control signal is represented by u, the reference is represented by r, and the error
is represented by e. It is always the key responsibility of an engineer to build a controller that reacts and produces a control signal u such that e ≈ 0 and x ≈ r.
For developing a robotic system to perform a desired task, we require precise control
of variables like pose, velocity, force, and torque. A robotic control system executes
a planned sequence of motions and forces in the presence of unforeseen errors such
as inaccuracies in the model of the robot, tolerances in the work piece, static friction
in joints, mechanical compliance in linkages, electrical noise on transducer signals,
and limitations in the precision of computation. Control systems allow every joint or
wheel of the robot to follow a specific set of commanded motions and functions.
Control systems in robotics are broadly categorized into two types as described in the
following sections.
The systems in which the output has no effect on the control action are called open-loop control systems. In an open-loop system, the output is not compared with the reference input. Thus, to every reference input there corresponds a fixed operating condition. The accuracy of such systems depends on calibration. In the presence of disturbances, an open-loop system cannot perform the required task because, when the output changes due to disturbances, the change is not fed back to the input to correct the output.
Feedback control systems are closed-loop control systems; the terms closed-loop control and feedback control are often used interchangeably. The actuating error signal, which is the difference between the reference input and the feedback signal, is fed to the controller so as to reduce the error and bring the output of the system to the desired value.
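A minimal sketch of such a closed-loop controller of the PID type is shown below (illustrative, not from the text): the gains, the sampling period, and the simple first-order plant standing in for the robot joint are all assumed values.

```c
/* Illustrative sketch: a closed-loop PID control loop driving the error
 * between reference and estimated state toward zero. Values are assumed. */
#include <stdio.h>

int main(void)
{
    const double Kp = 2.0, Ki = 1.0, Kd = 0.05;  /* assumed controller gains      */
    const double dt = 0.01, tau = 0.2;           /* sample time, plant constant   */
    const double r = 1.0;                        /* reference (desired state)     */
    double y = 0.0;                              /* estimated state (plant output)*/
    double integral = 0.0, prev_error = 0.0;

    for (int k = 0; k < 500; k++) {              /* simulate 5 seconds            */
        double e = r - y;                        /* error = reference - estimate  */
        integral += e * dt;
        double u = Kp * e + Ki * integral + Kd * (e - prev_error) / dt;
        prev_error = e;
        y += dt * (u - y) / tau;                 /* simple stand-in plant model   */
    }
    printf("Output after 5 s: %.3f (reference %.1f)\n", y, r);
    return 0;
}
```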
Adaptive control uses feedback to update the model of the process based on the
results of previous actions. The measurement of the results of previous actions is
used to adapt the process model to correct for changes in the process. This type of
adaption corrects for errors in the model due to long-term variations in the environ-
ment, but it cannot correct for dynamic changes caused by local disturbances.
Robot control architectures are conceptual structures for organizing robot control
such that one can design controllers systematically. The term robot architecture is
used to refer to how a system is divided into sub-systems and how the subsystems
interact. Robot architectures and programming began in the late 1960 with the
Shakey robot (schematic shown in Fig. 7.3) at Stanford University.
Shakey’s architecture was decomposed into three functional elements: sensing,
planning, and executing. The sensing system translated the camera image into an
internal world model. The planner took the internal world model and a goal and
generated a plan (i.e., a series of actions). The executor took the plan and sent the
actions to the robot’s actuators.
Figure 7.4 shows a schematic of the sense-plan-act based architecture for a robotic
system. Sensing provides the robot with information about the state of itself and the
environment. From this, a decision is made about how the robot should act and the
controller commands the robot actuators to perform accordingly. The differences
among robot control architectures lie almost entirely in the control logic, i.e., in the
controller.
Robot control architecture can be divided into two classes: model-based archi-
tecture and sensor-based architecture.
Some robot control architectures use internal models to help the controller to decide
what to do. These models are typically mathematical models, maps of the environ-
ment, or mechanical solid models. Model-based control architectures involve intensive processing, which costs power and computational time, and they need memory to store the models used to represent robot control. Further, model-based architectures need constant updates for applications in dynamic environments.
There are three common robot control approaches.
Hierarchical Approach
Hierarchical approach as presented in Fig. 7.5 makes extensive use of stored
information and models to predict what might happen under different inputs,
attempting to optimally choose an output. This allows the robot to plan a sequence
of actions to achieve complex goals or exhibit a behavior, thereby allowing a
Fig. 7.4 Schematic of sense-plan-act structure common to all robot control architectures
designer to give commands that are interpreted in terms of the robot model. This
paradigm is often called sense-plan-act (SPA), thereby substituting plan for decide in
our usual scheme.
The control components in hierarchical control are said to be horizontally orga-
nized. Information from the environment in the form of sensor data has to filter
through several intermediate stages of interpretation before finally becoming avail-
able for a response. The main architectural features of the SPA approach are that
sensing flowed into a world model, which was then used by the planner, and that
plan was executed without directly using the sensors that created the model.
The emphasis in these early systems was in constructing a detailed world model
and then carefully planning out what steps to take next. The problem was that, while
the robot was constructing its model and deliberating about what to do next, the
world was likely to change. So, these robots exhibited the odd behavior that they
would look (acquire data, often in the form of one or more camera images), process,
and plan, and then (often after a considerable delay) they would act for a couple of steps before beginning the cycle all over again. Hierarchical architectures
tend to support the evolution of intelligence from semi-autonomous control to fully
autonomous control.
Reactive Approach
A reactive approach used in robotics control as presented in Fig. 7.6 is a connection
of sensing with acting.
In 1986, Rodney A. Brooks published an article which described a type of
reactive approach called the subsumption architecture. A subsumption architecture
is built from layers of interacting finite-state machines, each connecting sensors to
actuators directly. These finite-state machines were called behaviors (leading some to refer to this style of control as behavior-based robotics).
Hybrid Approach
Hybrid control approach shown in Fig. 7.8 combines the hierarchical and reactive
control approach.
The hybrid approach can be described as plan, then sense and act. Planning covers a long time horizon and uses a global world model. Sense-act covers the reactive (real-time) part of the control. The hybrid approach may be characterized by a layering of capabilities, where low-level layers provide reactive capabilities and high-level layers provide the more computationally intensive deliberative capabilities. The most popular variant of the hybrid approach is three-layered: a controller or reactive layer, a sequencer or executive layer, and a planner or deliberative layer.
Robotic sensors are used to estimate a robot’s internal condition and surrounding
environment. The environmental information is passed to a controller through
sensors to enable appropriate behavior by the robot. The controller acts as the
brain of the robot and has the instructions programmed by the roboticists. Figure 7.9
represents the types of sensors used in robotics.
Internal sensors are used to collect the robot's internal information like joint position, velocity, acceleration, orientation, and speed. External sensors are used to collect information about the environment surrounding the robot for applications like robot navigation, object handling, and identification.
Contact sensors have physical contact with the object to be sensed. Commonly used
contact sensors are as follows.
Touch Sensor
Touch sensors are used to indicate whether a contact has been established with an
object or not. Also, these sensors indicate the presence of the object within the
fingers of the end effector. Humans have skin that acts as a contact sensor. When one touches a particular object or material, one understands its physical properties like shape, texture, and temperature. Analogous to human skin are what are known as tactile sensors in robots. Figure 7.10 shows a typical arrangement of tactile sensors
on a gripper and layers of it.
Fig. 7.10 Tactile sensors mounted on a gripper and layers of tactile sensor
Slip Sensor
Slip sensors are used to specify the contact made by the external objects and
determine the slip of the object over the robot body.
Figure 7.11 shows a schematic of a slip sensor. When the object comes in contact
tangentially to the rotating disc, the disc rotates. This leads to the change in the
contact position of the plunger with the resistive contact plate. This causes changes
in resistance across the plunger and contact plate indicating slipping of the object.
Force Sensor
Force sensors are commonly used in a robot’s end effector for measuring the reaction
forces developed at the interfaces of mechanical assemblies. Most of the force
sensors work on the principle of change in resistance or capacitance due to the
deflection in the sensing membrane while applying a force. Figure 7.12 shows a
robot gripper with force sensor.
$$ d = b\tan\theta $$
where b is the distance between the detector and the source of the sensor, and θ is the
angle between the source and the plane perpendicular to detector. The parameter b is
known for specific sensors and θ is estimated using the principles of optics. Using b
and θ, the distance of the object from the robot is calculated.
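A short numerical sketch of the triangulation relation is given below (illustrative; the separation b and the estimated angle θ are assumed example values).

```c
/* Illustrative sketch: object distance from d = b * tan(theta). Values assumed. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double b = 0.04;              /* assumed detector-source separation (m)   */
    double theta = 1.2;           /* assumed angle estimated by the sensor (rad) */
    double d = b * tan(theta);    /* distance of the object from the sensor   */
    printf("Estimated distance: %.3f m\n", d);
    return 0;
}
```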
Proximity Sensor
Proximity sensors are used to indicate the presence of an object within a specified
distance without physical contact with the object. When an object comes near the sensor, the high-frequency magnetic field is disturbed; the resulting deflection in the internal circuit sends a signal to the controller, which then opens, closes, or actuates the actuator. One can set a particular target for
the proximity sensor, as shown in Fig. 7.14. Generally, this type of sensors has a
proximity switch and an active face. As soon as the target comes to the particular
range, the active face sends signal to the proximity switch which sends the signal to
the controller. This is generally a photoelectric sensor which can be installed in the
robot to detect the presence of an object in its work volume.
Vision Sensor
Vision sensors are used to recognize objects in the form of 3D or 2D images. These sensors are also called robot vision, machine
vision, or artificial vision sensors (Fig. 7.15).
This can be used to detect the good or bad products in an inspection department of
a manufacturing company. A vision sensor first inspects a part and compares
it with an image that was already fed to the computer. The outcome of the process
informs whether the object has passed or failed the quality test.
Robots are employed for both industrial and personal purposes. Most robots in
industries follow planned procedures to carry out their operations and are not
accustomed to deal with unexpected or unplanned situations. Traditional techniques
of controlling a robot such as PID controllers have been used over the years, but they
can perform efficiently in limited environmental conditions only. With an increase in
operational environment complexity, robots are expected to perform self-diagnosis, self-reorganization, and fault detection on their own. These features need
decision-making capability by the robot leading to the requirement of intelligent
control systems. An intelligent control system enables a robot to learn how to
respond to unknown dynamic environment or situations. Intelligent control systems
are generally achieved using advanced software computations integrated with sen-
sors. A schematic representation of intelligent techniques from the domain of
artificial intelligence for robotics control is presented in Fig. 7.16.
A fuzzy logic controller in a robotic system would start with crisp sensor readings (e.g., numeric values from proximity sensors), translate them into linguistic classes in the fuzzifier, fire appropriate rules in the fuzzy inference engine, and generate a fuzzy output value. The fuzzy output value is then translated into a crisp value that represents the actuator control signals. Fuzzy logic allows a certain type of discrete encoding of the fuzzy input by using rule-based systems.
Base rules are represented as a collection of if–then rules which take the general form "IF <antecedent> THEN <consequent>", for instance, a rule of the kind "IF the obstacle distance is NEAR THEN the speed is SLOW".
Neural networks are computing systems that mimic the functioning of the human brain. They are based on a collection of connected nodes called artificial neurons, analogous to the neurons present in the human brain. Like biological neurons that communicate with one another through synapses, artificial neurons also receive, process, and pass on signals to one another. Artificial Neural Networks (ANNs) attempt to emulate the biological neural network. An artificial neuron, which is the basic element of an ANN, contains three main components: weights, a threshold (bias), and an activation function.
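A minimal sketch of such an artificial neuron is given below (illustrative, not from the text): it forms the weighted sum of its inputs, adds a threshold (bias) term, and passes the result through a step activation function; all numeric values are assumed.

```c
/* Illustrative sketch: a single artificial neuron with weights, a threshold
 * (bias), and a step activation function. All numeric values are assumed. */
#include <stdio.h>

#define N_INPUTS 3

static double activate(double x)              /* step activation function */
{
    return (x >= 0.0) ? 1.0 : 0.0;
}

int main(void)
{
    double w[N_INPUTS] = { 0.8, -0.4, 0.3 };   /* assumed synaptic weights */
    double x[N_INPUTS] = { 1.0,  0.5, 0.2 };   /* assumed input signals    */
    double bias = -0.5;                        /* assumed threshold term   */

    double sum = bias;
    for (int i = 0; i < N_INPUTS; i++)
        sum += w[i] * x[i];                    /* weighted sum of inputs   */

    printf("Neuron output: %.1f\n", activate(sum));
    return 0;
}
```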
Chapter 8
Academic Projects and Tools in Robotics
This chapter presents projects on robotics and embedded systems demonstrating the
implementation of the concepts discussed in the previous chapters. It is envisaged
that these projects will encourage readers to undertake innovative projects fol-
lowing a systematic approach for realizing textbook concepts through real-world
visualization and thereby attempting to solve societal issues. The tools and equip-
ment usually used while doing robotics are briefed in this chapter.
The objective of this project is to enable students to acquire skills in terms of creative
thinking, time management, and self-learning while earning academic credentials.
This project was awarded the first position in the IEEE Best Students' Project Award during March 1–2, 2012, held at the Maulana Azad National Institute of Technology, Bhopal.
4. Contingency plan to deal with situations that could not be seen while planning for
the project
5. Documentation and presentation of the project
Fig. 8.1 Developed hand grasping four types of objects (a) oval (egg), (b) cuboid (mobile phone),
(c) circular (cricket ball), (d) cylindrical (cup)
A human hand helps interact with the environment in our day-to-day activities.
Finger amputation is one of the most frequently confronted forms of partial hand
losses. Despite serious research works, little progress has been made toward a
functional prosthetic finger. Low functionality with respect to degrees of freedom
(DoF), joint range of motion (RoM), and unrelated actuation mechanism are the
main reasons for non-acceptance of finger prosthesis by amputees. Furthermore,
available market variants were far from the human finger in terms of deploying
abduction-adduction movement.
This project presented the development of a prosthetic finger prototype following
a biomimetic approach inspired by human finger anatomy and physiology. The
developed prototype mimicked the human finger in dimensions, joint RoM, DoF,
and electromyogram (EMG)-based control mechanism. It could perform both flex-
ion-extension and abduction-adduction movements following the EMG recognition
for the intended type of movements. The torque from the actuation unit was
transmitted to the finger joints through an antagonistic tendon mechanism. The
biomimetic approach followed for the design and development was categorized
into three main stages: (i) detailed study of the human finger anatomy and physiol-
ogy, (ii) derivation of useful functions and processes like development of computer
aided design (CAD) finger prototype, selection of the material to be used as the skeletal structure,
choice of actuation and tendon mechanism as well as static and dynamic constraints,
and (iii) imitation of the derived functions and processes in artificial systems, i.e.,
into the prototype shown in Fig. 8.2. The working video of this project is available at
“https://www.youtube.com/watch?v=P9Hoo2VvEG8”.
Realizing the challenges in military battlefields, an ambitious project on a mobile robot
equipped with hazardous gas sensing capability was carried out during 2018–2020 at
a graduate level. The project involved development of a mobile robot prototype
customized with a module of gas sensors, human detection sensor, GPS, and
obstacle detection sensor with wireless monitoring system. This project was nomi-
nated as one of the 21 innovative projects in the e-Yantra Ideas Competition 2019
organized by the Indian Institute of Technology Bombay. This project was then
published as a research paper entitled “A Mobile Robot for Hazardous Gas Sensing”
in IEEE International Conference on Computational Performance Evaluation, 2020.
First, the reliable components to be used and the tasks to be followed are identified:
The developed mobile robot shown in Fig. 8.3 is based on a six-wheeled rocker-
bogie mechanism. This mechanism, with wheel formula 6 × 6 × 4 (total number of wheels × number of actuated wheels × number of steerable actuated wheels), allows the robot to navigate in uneven terrain and rotate 360 degrees at zero radius. The robot can sense the presence of hazardous gases in the environment and
map the locations of detected gases in real-time using GPS.
The prototype robot was tested for recognizing hazardous gases like carbon
dioxide, liquefied petroleum gas, and vaporized alcohol gas. A handheld control
unit was developed to allow manual control of robot navigation and to receive information about the type and concentration of the gases under study, the presence of human beings in proximity to the robot, and the GPS location of the robot. The designed
robot could avoid collision with obstacles while navigating using ultrasonic sensors.
A neural network-based classifier was implemented to recognize the gases with an
average accuracy of 98%. All the communication between the robot and the hand-
held control device was accomplished using Zigbee modules. Working video of this
project is available at “https://www.youtube.com/watch?v=jDRiWCqq5Yo”.
Figure 8.4 shows the experimental setup of EMG-based prosthetic hand control.
EMG-based prosthetic hand control architecture was implemented in three phases.
In the first phase, the prosthetic hand initially remained in the open state. Raw EMG data were collected from the biceps brachii muscles of the subjects using an EMG unit through Ag/AgCl electrodes. The instrumentation amplifier amplified the EMG with a gain of 100 and a common mode rejection ratio (CMRR) of 110 dB. The acquired EMG was passed through a filter with a pass band of 10–500 Hz. In the second phase, the pre-processed EMG was digitized with a 10-bit analog-to-digital converter (ADC) at 8.2 kHz. The digitized EMG was processed in the controller, wherein windowing, feature extraction, and recognition of the user's intention for grasping were accomplished. The EMG was segmented with a 50-millisecond window. Root mean square (RMS), a time-domain feature, was used to quantify the EMG. RMS was chosen as the feature because it reflects the physiological activity in the motor unit during contraction. A finite state algorithm (FSA) was implemented to recognize the user's intention for a grasping operation. The choice of FSA was based on the fact that it involves lower computational complexity compared to machine
Fig. 8.4 (a) Experimental setup of EMG-based prosthetic hand control. (b) Data glove equipped
with force sensors
learning algorithms and thereby was suited for real-time embedded applications. In
the third phase, on recognizing user’s intention for grasping, the controller com-
mands the actuators for grasping by the prosthetic hand. Experiments have been
accomplished in four sessions, each with 20 trials, by five subjects in both sitting and
standing positions. It has been found that the prosthetic hand can perform grasping
with an average accuracy of 96.2 ± 2.6%. The controller satisfied the neuromuscular
constraint enabling the prosthetic hand to perform grasping operation in
250.80 ± 1.1 milliseconds, which is comparable to the time required by human
hands, i.e., 300 milliseconds.
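A short sketch of the RMS feature computation over one EMG window is given below (illustrative, not the project code): at the 8.2 kHz sampling rate mentioned above, a 50 ms window holds about 410 samples; the signal used here is a synthetic stand-in.

```c
/* Illustrative sketch: RMS of one 50 ms EMG window (about 410 samples at
 * 8.2 kHz). The buffer holds a synthetic stand-in signal, not real EMG. */
#include <math.h>
#include <stdio.h>

#define WINDOW_SIZE 410

static double window_rms(const double *samples, int n)
{
    double sum_sq = 0.0;
    for (int i = 0; i < n; i++)
        sum_sq += samples[i] * samples[i];   /* accumulate squared samples */
    return sqrt(sum_sq / n);                 /* root of the mean square    */
}

int main(void)
{
    double emg[WINDOW_SIZE];
    for (int i = 0; i < WINDOW_SIZE; i++)    /* synthetic stand-in signal  */
        emg[i] = 0.5 * sin(0.07 * i);

    printf("RMS of the window: %.4f\n", window_rms(emg, WINDOW_SIZE));
    return 0;
}
```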
Figure 8.5 shows grasping of objects in real time by the prosthetic hand. Real-time
experiments with the embedded EMG controller were performed by the subjects in
four sessions for grasping the four objects under study. During this, subjects were
either in sitting or standing positions randomly. The grasping performances have
been evaluated in terms of grasping accuracy, success to EMG intensity ratio, and
grasping time.
Fig. 8.5 Prosthetic hand grasping the objects in real time (a) prosthetic hand grasping a plastic
container, (b) subject wearing the prosthetic hand, (c) prosthetic hand grasping a coffee mug, (d)
prosthetic hand grasping a cricket ball, (e) prosthetic hand grasping a screw-driver box
Figure 8.6 shows the grasping accuracy average across the subjects while trying
to grasp the four objects. It has been found that the subjects could use the prosthetic
hand with an average grasping accuracy of 96.8 ± 3% in sitting position and
95.6 ± 2.2% in standing position.
$$ P = \frac{\sum_{i=1}^{N} A(i)\,\%}{I_{EMG}} \quad (8.2) $$
Fig. 8.6 Grasping accuracy for the four objects averaged across all subjects
where N represents the total number of sessions. The average EMG intensity
represented as IEMG is defined as in Eq. 8.3.
$$ I_{EMG} = \frac{1}{N}\sum_{i=1}^{T}\frac{e(i) - e_{min}}{e_{max} - e_{min}} \quad (8.3) $$
where e(i) is the ith sample of EMG, T represents the total number of trials during the experiment, and emin and emax are the minimum and maximum values of EMG, respectively. Figure 8.7 shows the success to EMG intensity ratio for five subjects across four sessions of 20 trials each while grasping objects with the prosthetic hand. Success refers to the grasping accuracy following Eq. 8.1. EMG intensity depicts the subjects' effort to prevent dropping of the object during grasping, supported with visual feedback. Observing the variation of the success to EMG intensity ratio compared to the average accuracy shown in Fig. 8.6, it can be understood that different levels of EMG intensity, i.e., user effort, were exerted to prevent dropping of the different objects.
Two-way analysis of variance (ANOVA) was performed for perusal of this
variation. Table 8.2 shows the results of ANOVA of grasping accuracy vis-à-vis
success to EMG intensity ratio. The significant p-value indicates a statistically
significant variation between grasping accuracy and success to EMG intensity for
each subject. This signifies that the users have put different efforts in terms of EMG
intensity for maintaining a stable grasp without dropping the objects and thereby
ensuring a good grasping accuracy through visual bio-feedback.
The grasping time required by the prosthetic hand was estimated using the data glove
shown in Fig. 8.4 while grasping the four objects. The data glove was equipped with six force sensors, one each for measuring the contact forces of the five fingers (thumb, index, middle, ring, and little) and the palm. Another controller was used externally to read the electrical signals from the force sensors to indicate a stable grasp of an object. A minimum force of 3 N on any two or more sensors signifies a
successful grasp close action by the prosthetic hand. The time elapsed method was
utilized to record the time instants of EMG received by the EMG unit for a grasping
action and the stable grasping of the object by the prosthetic hand in real time. The
time difference hence calculated serves as the grasping time by the prosthetic hand
controller. Figure 8.8 represents the estimated grasping time by the prosthetic hand
calculated using the data glove for the four objects. The grasping times were
estimated to be 251.9 ± 1.1 milliseconds, 248.8 ± 1.4 milliseconds, 250.9 ± 0.9 mil-
liseconds, and 251.7 ± 0.8 milliseconds corresponding to the four objects. The
typical grasping time by the prosthetic hand controller for any object was calculated
as 250.8 ± 1.1 milliseconds. The working video of this project is available at “https://
www.youtube.com/watch?v=22JZ4rmPoCc”.
The human hand performs multiple grasp types during daily living activities.
Adaptation of grasping force to avoid object slippage, as employed by the human brain, has been postulated as an intelligent approach. Recently, research on prosthetic hands with human-like capabilities has been pursued by many researchers, but with limited success. Advanced prosthetic hands that can perform different grasp types use multiple EMG channels. This requires the user to wear a greater number of electrodes, leading to inconvenience, and still yields inadequate grasping accuracy by the prosthetic hands.
This project reports a prosthetic hand performing 16 grasp types in real time using
a single-channel EMG customized with an Android application Graspy. An embed-
ded EMG-based grasping controller with a network of force-sensing resistors and kinematic sensors prevents slipping and breaking of grasped objects. Experiments were conducted with four able-bodied subjects for performing 16 grasp types and individual finger movements. A proportional-integral-derivative (PID) algorithm was implemented to regulate the kinematic constraints of the prosthetic hand fingers in linear relation to the force-sensing resistors. The control algorithm can prevent slipping and breaking of grasped objects with a finger joint angle reproduction precision of 0.16°. The hand could perform grasping of objects like a tennis ball, cookie, knife, screwdriver, water bottle, egg, pen, and plastic container, while emulating the 16 grasp types with an accuracy of 100%. The experimental setup is
shown in Fig. 8.9. The hand grasping four objects is shown in Fig. 8.10.
Fig. 8.9 Experimental setup: 101: active EMG electrodes; 102: reference EMG electrode; 103:
wireless receiver; 104: PID controllers; 105: instrumentation amplifier and band pass filter; 106:
microcontroller; 107: electrical power source (3300 mAh); and 108: force sensors
Fig. 8.10. Prosthetic hand grasping objects like tennis ball (a), cookie (b), egg (c), and Rubik’s
cube (d).
Experiments were conducted on the developed prosthetic hand with four subjects
for testing multiple grasp performance accuracy and adaptive capability of the
prosthetic hand.
Figure 8.11 shows the time-line parameters associated with prosthetic hand
control architecture. Figure 8.11a shows RMS values of real-time pre-processed
EMG in millivolts containing two events of MVC by the user. Simultaneously,
Fig. 8.11b, c reflects individual fingertip forces on the grasped object and MCP joint
angles of the individual fingers. It can be seen from this figure that while performing
a grasp type, finger joint angles were adjusted depending upon the force on their
respective fingertip. Figure 8.11d, e displays instances of grasp close and grasp open
by the prosthetic hand.
Fig. 8.11 (a) EMG from bicep brachii muscle actuating the grasping presented in (d), (b) force
generated at fingertips during the grasping presented in (d), (c) MCP joint angles of the five fingers
during grasping presented in (d), (d) grasping by the prosthetic hand (e), grasp open by the
prosthetic hand actuated by the EMG
2. Table vice: A table vice is a tool used for holding objects to facilitate cutting and drilling operations on them. Objects of different sizes can be held firmly by adjusting the vice jaw through a screw mechanism.
3. Screwdriver: A screwdriver is used for tightening or loosening miniature, small, and medium screws of different sizes. It can be either electrically powered or manually operated.
4. Wrenches: Wrenches are used to tighten or loosen bolts and nuts of various sizes.
5. Saw: It is used for cutting and is a very important tool for building a robot. The
common blade size ranges from 10 to 12 inches in length.
6. Vernier caliper: A Vernier caliper allows marking out and measurement up to a
precision of 0.01 mm.
7. File: A file is used for smoothening the rough edges of a work-piece. Different
types of files like round, half-round, and flat are available for specific
applications.
8. Centre punch: A center punch is used for accurate marking of holes to be drilled.
9. Drill press: A drill press is a tool that is used for accurate drilling of the marked
holes. It can either be electrically powered or manually operated.
10. Utility knives: Utility knives are also called carpet knives used to cut plastic,
rubber, paper, and other soft materials.
11. Hot glue guns: Hot glue guns are used for gluing parts quickly. They are used in a number of applications, like attaching two work-pieces, properly routing electrical connectors, and protecting circuits from damage due to exposure to moisture and water.
12. Safety goggles: Safety goggles are an important robotic tool used to protect the
eyes as the fine particles that come out abruptly while working on a machine tool
are dangerous to the eyes.
Electrical and electronic tools are used for developing the power supply and control
units of robotic systems such as driver, controller, and buffer circuits. Some of the
commonly used electrical and electronic tools are:
1. Soldering iron: A soldering iron is used for all routine soldering of electronic and
electrical parts. An anti-static wrist strap is commonly used while soldering to prevent the buildup of static electricity near sensitive electronic components.
2. Soldering gun: A soldering gun is a pistol-shaped, electrically powered tool used
for soldering metal work pieces. It uses a tin-based solder to build a strong
mechanical bond.
3. Bench power supply: A bench power supply is used to provide a consistent biasing voltage to a circuit during the development and testing of robots.
4. Sensors: Sensors are used for converting a physical phenomenon to equivalent
electrical signal. Different types of sensors used in robotics are light sensors,
Software tools are used for the design and analysis of robotic systems. They are very useful as the behavior and functioning of a robotic system can be evaluated before testing on a physical system. Further, these tools are used to teach insights into the functioning of robotic principles. Some of the commonly used software tools are:
1. RoboAnalyzer: It is a licensed software used for teaching and learning robotic
concepts. It is a 3D model-based software that uses a virtual platform for
modeling the robots. In this tool, uses of translational and rotational matrices,
Denavit-Hertenberg parameters, kinematic and dynamic analysis can be realized
and visualized in practice. It has the feature of integrating the modeled robots to
various other software tools such as MATLAB, MS-Excel, and other applications
via a COM interface. This toolbox along with tutorials is available at www.
roboanalyzer.com.
2. Peter Corke’s robotics toolbox: It is a MATLAB-based robotics toolbox. The
toolbox contains functions and classes to represent orientation and pose as matrices. It uses MATLAB functions for representing the kinematics and dynamics of serial-link manipulators. The toolbox is available at https://petercorke.com/
toolboxes/robotics-toolbox/.
3. jmeSim: It is an open-source, multi-platform robotic simulation tool. This
robotic software tool is a Java-based robot simulator that provides excellent
graphical and physical fidelity. It has integration with Robot Operating System
(ROS). It comprises a bundle of sensors such as thermal camera, depth camera,
sonar sensor, and laser range finder along with many actuators for simulation of
robotic systems. A desired simulation environment can also be created using
this tool.
8.6.4 Equipment
With the advances in automation technology, many tiresome tasks involved in the
development of robotic systems have become much easier and simpler using equip-
ment. Three such pieces of equipment are:
1. 3D printer: A 3D printer can construct robotic systems or their parts from computer-aided 3D models using the fused deposition technique. It uses slicing software to convert
the modeled 3D structures into geometric codes. These codes on being fed to the
3D printer are interpreted by its controller. The controller commands the actuators
and the nozzle to perform the deposition of fused filament layer by layer devel-
oping the physical model.
2. PCB printer: A PCB printer allows the user to print a circuit diagram developed
in a PCB-design software, which, in turn, allows the user to assemble components
on the printed circuit board. Development of a printed circuit board using PCB
printer is much faster and easier compared to the traditional method. Each PCB
printer includes an inkjet print head that can dispense fine droplets of conductive
ink and insulating ink, allowing the user to create multi-layered rigid and flexible circuits
on FR-4, Kapton, or any substrate of the developer’s choice. In addition, each
printer has two heads, one for dispensing glue and solder paste and another to
pick-and-place components to be used in the circuit.
3. CNC machine: Computer Numerical Control (CNC) machines are automated
systems that utilize computer programming to control the movement and opera-
tion of cutting tools, allowing for highly accurate and repeatable processes. The
core of CNC operation lies in the CNC controller, an embedded system that
interprets the design specifications and converts them into a language, which the
machine can understand. To operate a CNC machine, usually the process begins
with the creation of a digital design using Computer-Aided Design (CAD)
software. The designer then translates this design into a CNC compatible format,
generating a G-code program. G-code is a series of alphanumeric commands that
specify the machine’s movements, tool changes, speeds, and other parameters.
For example, a G01 command indicates linear motion, while G02 and G03
commands specify circular motions. The G-code program essentially serves as
a set of instructions that guides the CNC machine through the entire machining
process. Once the G-code program is ready, it is loaded into the control unit of
the CNC machine. The CNC controller interprets the G-code and sends signals to the
motors and actuators, directing the movement of the cutting tool or the work
piece. CNC machines typically operate on multiple axes, such as X, Y, and Z,
allowing for precise control over the position of the tools in three-dimensional
space. During operation, sensors and feedback mechanisms are often integrated
into CNC machines to ensure accuracy. These may include encoders to monitor
the position of each axis, probes to measure work piece dimensions, and other
sensors for temperature or tool wear. This level of automation enhances effi-
ciency, reduces human error, and facilitates the production of intricate compo-
nents that would be challenging or impossible to manufacture manually.
The examples cited in this chapter are meant to inspire students to build and create robots using the knowledge acquired from the earlier chapters. This will
guide the students to think, plan, design, and develop their projects following a
systematic approach. Further, the course instructors can follow these examples to
design new projects for the effective learning of the students engaged in solving real-
world problems through their projects. More such projects can be explored at www.
tezu.ernet.in/erl.