
axioms

Neutrosophic
Multi-Criteria
Decision Making
Edited by
Florentin Smarandache, Jun Ye and Yanhui Guo
Printed Edition of the Special Issue Published in Axioms

www.mdpi.com/journal/axioms
Neutrosophic Multi-Criteria
Decision Making

Special Issue Editors


Florentin Smarandache
Jun Ye
Yanhui Guo

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade


Special Issue Editors
Florentin Smarandache, University of New Mexico, USA
Jun Ye, Shaoxing University, China
Yanhui Guo, University of Illinois at Springfield, USA

Editorial Office
MDPI
St. Alban-Anlage 66
Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal Axioms
(ISSN 2075-1680) from 2017 to 2018 (available at: http://www.mdpi.com/journal/axioms/special_issues/neutrosophic_mcdm).

For citation purposes, cite each article independently as indicated on the article page online and as
indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. Journal Name Year, Article Number,
Page Range.

ISBN 978-3-03897-288-4 (Pbk)


ISBN 978-3-03897-289-1 (PDF)

Articles in this volume are Open Access and distributed under the Creative Commons Attribution
(CC BY) license, which allows users to download, copy and build upon published articles even for
commercial purposes, as long as the author and publisher are properly credited, which ensures
maximum dissemination and a wider impact of our publications. The book taken as a whole is

© 2018 MDPI, Basel, Switzerland, distributed under the terms and conditions of the Creative
Commons license CC BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Contents

About the Special Issue Editors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Preface to ”Neutrosophic Multi-Criteria Decision Making” . . . . . . . . . . . . . . . . . . . . . ix

Young Bae Jun, Seok-Zun Song, Florentin Smarandache and Hashem Bordbar
Neutrosophic Quadruple BCK/BCI-Algebras
Reprinted from: Axioms 2018, 7, 41, doi: 10.3390/axioms7020041 . . . . . . . . . . . . . . . . . . . 1

Muhammad Akram, Shumaiza and Florentin Smarandache


Decision-Making with Bipolar Neutrosophic TOPSIS and Bipolar Neutrosophic ELECTRE-I
Reprinted from: Axioms 2018, 7, 33, doi: 10.3390/axioms7020033 . . . . . . . . . . . . . . . . . . . 17

Young Bae Jun, Seon Jeong Kim and Florentin Smarandache


Interval Neutrosophic Sets with Applications in BCK/BCI-Algebra
Reprinted from: Axioms 2018, 7, 23, doi: 10.3390/axioms7020023 . . . . . . . . . . . . . . . . . . . 34

Surapati Pramanik, Partha Pratim Dey, Florentin Smarandache and Jun Ye


Cross Entropy Measures of Bipolar and Interval Bipolar Neutrosophic Sets and Their
Application for Multi-Attribute Decision-Making
Reprinted from: Axioms 2018, 7, 21, doi: 10.3390/axioms7020021 . . . . . . . . . . . . . . . . . . . 47

Muhammad Akram, Sundas Shahzadi and Florentin Smarandache


Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information
Reprinted from: Axioms 2018, 7, 19, doi: 10.3390/axioms7010019 . . . . . . . . . . . . . . . . . . . 72

Muhammad Akram, Hafsa M. Malik, Sundas Shahzadi and Florentin Smarandache


Neutrosophic Soft Rough Graphs with Application
Reprinted from: Axioms 2018, 7, 14, doi: 10.3390/axioms7010014 . . . . . . . . . . . . . . . . . . . 96

Jun Ye, Wenhua Cui and Zhikang Lu


Neutrosophic Number Nonlinear Programming Problems and Their General Solution Methods
under Neutrosophic Number Environments
Reprinted from: Axioms 2018, 7, 13, doi: 10.3390/axioms7010013 . . . . . . . . . . . . . . . . . . . 123

Kalyan Mondal, Surapati Pramanik, Bibhas C. Giri and Florentin Smarandache


NN-Harmonic Mean Aggregation Operators-Based MCGDM Strategy in a Neutrosophic
Number Environment
Reprinted from: Axioms 2018, 7, 12, doi: 10.3390/axioms7010012 . . . . . . . . . . . . . . . . . . . 132

Sidra Sayed, Nabeela Ishfaq, Muhammad Akram and Florentin Smarandache


Rough Neutrosophic Digraphs with Application
Reprinted from: Axioms 2018, 7, 5, doi: 10.3390/axioms7010005 . . . . . . . . . . . . . . . . . . . 148

Young Bae Jun, Florentin Smarandache, Seok-Zun Song and Madad Khan


Neutrosophic Positive Implicative N -Ideals in BCK-Algebras
Reprinted from: Axioms 2018, 7, 3, doi: 10.3390/axioms7010003 . . . . . . . . . . . . . . . . . . . 168

Ümit Budak, Yanhui Guo, Abdulkadir Şengür and Florentin Smarandache


Neutrosophic Hough Transform
Reprinted from: Axioms 2017, 6, 35, doi: 10.3390/axioms6040035 . . . . . . . . . . . . . . . . . . . 181

About the Special Issue Editors
Florentin Smarandache is a polymath: professor of mathematics, scientist, writer, and artist. He received his
M.Sc. in Mathematics and Computer Science from the University of Craiova, Romania, and his Ph.D.
in Mathematics from the State University of Kishinev, and pursued post-doctoral studies in Applied
Mathematics at Okayama University of Sciences, Japan. He is the founder of neutrosophic set,
logic, probability, and statistics and, since 1995, has published hundreds of papers on neutrosophic
physics, superluminal and instantaneous physics, unmatter, absolute theory of relativity, redshift and
blueshift due to the medium gradient and refraction index besides the Doppler effect, paradoxism,
outerart, neutrosophy as a new branch of philosophy, Law of Included Multiple-Middle, degree
of dependence and independence between the neutrosophic components, refined neutrosophic
over-under-off-set, neutrosophic overset, neutrosophic triplet and duplet structures, DSmT, and so
on in numerous peer-reviewed international journals and books and he has presented papers and
plenary lectures in many international conferences around the world.

Jun Ye is currently a professor in the Department of Electrical and Information Engineering,


Shaoxing University, China. He has more than 30 years of experience in teaching and research.
His research interests include soft computing, fuzzy decision-making, intelligent control, robotics,
pattern recognitions, rock mechanics, and fault diagnosis. He has published more than 200 papers
in journals.

Yanhui Guo, Assistant Professor, received his B.S. degree in Automatic Control from Zhengzhou
University, China, his M.S. degree in Pattern Recognition and Intelligence System from Harbin
Institute of Technology, China, and his Ph.D. degree from Utah State University, Department of
Computer Science, USA. He was a research fellow in the Department of Radiology at the University
of Michigan and an assistant professor at St. Thomas University. Dr. Guo is currently an assistant
professor in the Department of Computer Science at the University of Illinois at Springfield. Dr. Guo
has published more than 100 research papers, has completed more than 10 grant-funded research
projects, and has worked as an associate editor for different international journals and as a reviewer
for top journals and conferences. His research area includes computer vision, machine learning, data
analytics, computer-aided detection/diagnosis, and computer-assisted surgery.

Preface to “Neutrosophic Multi-Criteria
Decision Making”
The notion of a neutrosophic quadruple BCK/BCI-number is considered in the first article
(“Neutrosophic Quadruple BCK/BCI-Algebras”, by Young Bae Jun, Seok-Zun Song, Florentin
Smarandache, and Hashem Bordbar), and a neutrosophic quadruple BCK/BCI-algebra, which
consists of neutrosophic quadruple BCK/BCI-numbers, is constructed. Several properties are
investigated, and a (positive implicative) ideal in a neutrosophic quadruple BCK-algebra and a
closed ideal in a neutrosophic quadruple BCI-algebra are studied. Given subsets A and B of a
BCK/BCI-algebra, the set NQ(A,B), which consists of neutrosophic quadruple BCK/BCI-
numbers with a condition, is established. Conditions for the set NQ(A,B) to be a (positive
implicative) ideal of a neutrosophic quadruple BCK-algebra are provided, and conditions for
the set NQ(A,B) to be a (closed) ideal of a neutrosophic quadruple BCI-algebra are given.
Techniques for the order of preference by similarity to ideal solution (TOPSIS) and
elimination and choice translating reality (ELECTRE) are widely used methods to solve multi-
criteria decision-making problems. In the second research article (“Decision-Making with
Bipolar Neutrosophic TOPSIS and Bipolar Neutrosophic ELECTRE-I”), Muhammad Akram,
Shumaiza, and Florentin Smarandache present the bipolar neutrosophic TOPSIS method and
the bipolar neutrosophic ELECTRE-I method to solve such problems. The authors use the
revised closeness degree to rank the alternatives in the bipolar neutrosophic TOPSIS method.
The researchers describe the bipolar neutrosophic TOPSIS method and the bipolar neutrosophic
ELECTRE-I method by flow charts, also solving numerical examples by the proposed methods
and providing a comparison of these methods.
In the third article (“Interval Neutrosophic Sets with Applications in BCK/BCI-Algebra”,
by Young Bae Jun, Seon Jeong Kim and Florentin Smarandache), the notion of
(T(i,j),I(k,l),F(m,n))-interval neutrosophic subalgebra in BCK/BCI-algebra is introduced for
i, j, k, l, m, n ∈ {1, 2, 3, 4}, and properties and relations are investigated. The notion of
interval neutrosophic length of an interval neutrosophic set is also introduced, and the related
properties are investigated.
The bipolar neutrosophic set is an important extension of the bipolar fuzzy set. The
bipolar neutrosophic set is a hybridization of the bipolar fuzzy set and the neutrosophic set.
Every element of a bipolar neutrosophic set consists of three independent positive membership
functions and three independent negative membership functions. In the fourth paper (“Cross-
Entropy Measures of Bipolar and Interval Bipolar Neutrosophic Sets and Their Application for
Multi-Attribute Decision-Making”), Surapati Pramanik, Partha Pratim Dey, Florentin
Smarandache, and Jun Ye develop cross-entropy measures of bipolar neutrosophic sets and
prove their basic properties. They also define cross-entropy measures of interval bipolar
neutrosophic sets and prove their basic properties. Thereafter, they develop two novel multi-
attribute decision-making strategies based on the proposed cross-entropy measures. In the
decision-making framework, the authors calculate the weighted cross entropy measures
between each alternative and the ideal alternative to rank the alternatives and choose the best
one, solving two illustrative examples of multi-attribute decision-making problems and
comparing the obtained result with the results of other existing strategies to show the

applicability and effectiveness of the developed strategies. At the end, the main conclusion and
future scope of research are summarized.
Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical
models for handling uncertainties, but they are mutually related. In the fifth research paper
(“Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information”),
Muhammad Akram, Sundas Shahzadi, and Florentin Smarandache introduce the notions of soft
rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models
for soft computing. The researchers describe a mathematical approach to handle decision-
making problems in view of NSRSs and also present an efficient algorithm of the proposed
hybrid model to solve decision-making problems.
Neutrosophic sets (NSs) handle uncertain information, while fuzzy sets (FSs) and
intuitionistic fuzzy sets (IFs) fail to handle indeterminate information. Soft set theory,
neutrosophic set theory, and rough set theory are different mathematical models for handling
uncertainties and they are mutually related. The neutrosophic soft rough set (NSRS) model is a
hybrid model combining neutrosophic soft sets with rough sets. Muhammad Akram, Hafsa M.
Malik, Sundas Shahzadi, and Florentin Smarandache apply neutrosophic soft rough sets to
graphs in the sixth research paper (“Neutrosophic Soft Rough Graphs with Application”),
introducing the idea of neutrosophic soft rough graphs (NSRGs) and describing different
methods for their construction. The authors consider the application of NSRG in decision-
making problems, developing, in particular, efficient algorithms to solve decision-making
problems.
In practical situations, one often has to handle programming problems involving
indeterminate information. Building on the concepts of indeterminacy I and neutrosophic
number (NN) (z = p + qI for p, q ∈ R), the seventh paper (“Neutrosophic Number
Nonlinear Programming Problems and Their General Solution Methods under Neutrosophic
Number Environments”, by Jun Ye, Wenhua Cui, and Zhikang Lu) introduces some basic
operations of NNs and concepts of NN nonlinear functions and inequalities. These functions
and/or inequalities contain indeterminacy I and naturally lead to a formulation of NN nonlinear
programming (NN-NP). These techniques include NN nonlinear optimization models for
unconstrained and constrained problems and their general solution methods. Additionally,
numerical examples are provided to show the effectiveness of the proposed NN-NP methods. It
is obvious that the NN-NP problems usually yield NN optimal solutions, but not always. The
possible optimal ranges of the decision variables and NN objective function are indicated when
the indeterminacy I is considered for possible interval ranges in real situations. A neutrosophic
number (a + bI) is a significant mathematical tool to deal with indeterminate and incomplete
information which generally exists in real-world problems, where a and bI denote the
determinate component and indeterminate component, respectively. Kalyan Mondal, Surapati
Pramanik, Bibhas C. Giri, and Florentin Smarandache define score functions and accuracy
functions for ranking neutrosophic numbers in the eighth paper, entitled “NN-Harmonic Mean
Aggregation Operators-Based MCGDM Strategy in a Neutrosophic Number Environment”. The
authors then define a cosine function to determine the unknown weight of the criteria. The
researchers define the neutrosophic number harmonic mean operators and prove their basic
properties. Then, they develop two novel multi-criteria group decision-making (MCGDM)
strategies using the proposed aggregation operators, solving a numerical example to
demonstrate the feasibility, applicability, and effectiveness of the two proposed strategies.

Sensitivity analysis with the variation of “I” on neutrosophic numbers is performed to
demonstrate how the preference for the ranking order of alternatives is sensitive to the change
of “I”. The efficiency of the developed strategies is ascertained by comparing the results
obtained from the proposed strategies with the results obtained from the existing strategies in
the literature.
A rough neutrosophic set model is a hybrid model, which deals with vagueness by using
the lower and upper approximation spaces. In the ninth research paper (“Rough Neutrosophic
Digraphs with Application”), Sidra Sayed, Nabeela Ishfaq, Muhammad Akram, and Florentin
Smarandache apply the concept of rough neutrosophic sets to graphs, introducing rough
neutrosophic digraphs and describing methods of their construction. Moreover, the researchers
present the concept of self-complementary rough neutrosophic digraphs and, finally, consider
an application of rough neutrosophic digraphs in decision-making.
The notion of a neutrosophic positive implicative N-ideal in BCK-algebras is introduced,
and several properties are investigated in the tenth paper (“Neutrosophic Positive Implicative
N-Ideals in BCK-Algebras”, by Young Bae Jun, Florentin Smarandache, Seok-Zun Song, and
Madad Khan). Relations between a neutrosophic N-ideal and a neutrosophic positive
implicative N-ideal are discussed. Characterizations of a neutrosophic positive implicative N-
ideal are considered. Conditions for a neutrosophic N-ideal to be a neutrosophic positive
implicative N-ideal are provided. An extension property of a neutrosophic positive implicative
N-ideal based on the negative indeterminacy membership function is discussed.
Hough transform (HT) is a useful tool for both pattern recognition and image processing
communities. In regard to pattern recognition, it can extract unique features for the description
of various shapes, such as lines, circles, and ellipses. In regard to image processing, many
applications can be handled with HT, such as lane detection for autonomous cars, blood cell
detection in microscope images, and so on. Although HT is a straightforward shape detector in a given
image, its shape-detection ability is low in noisy images. To alleviate this weakness in noisy-image
analysis and improve its shape-detection performance, in the eleventh paper
(“Neutrosophic Hough Transform”), Ümit Budak, Yanhui Guo, Abdulkadir Şengür, and
Florentin Smarandache propose the neutrosophic Hough transform (NHT). As has been shown in
earlier work, neutrosophy-based image processing applications are successful in noisy
environments. To this end, the Hough space is initially transferred into the NS domain by
calculating the NS membership triples (T, I, and F). An indeterminacy filtering is constructed
where the neighborhood information is used in order to remove the indeterminacy in the
spatial neighborhood of neutrosophic Hough space. The potential peaks are detected on the
basis of thresholding on the neutrosophic Hough space, and these peak locations are then used
to detect the lines in the image domain. Extensive experiments on noisy and noise-free images
are performed in order to show the efficiency of the proposed NHT algorithm. The authors also
compare their proposed NHT with the traditional HT and fuzzy HT methods on a variety of
images. The obtained results show the efficiency of the proposed NHT for noisy images.

Florentin Smarandache, Jun Ye, Yanhui Guo


Guest Editors

axioms
Article
Neutrosophic Quadruple BCK/BCI-Algebras
Young Bae Jun 1 , Seok-Zun Song 2, *, Florentin Smarandache 3 and Hashem Bordbar 4
1 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea;
[email protected]
2 Department of Mathematics, Jeju National University, Jeju 63243, Korea
3 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
4 Department of Mathematics, Shahid Beheshti University, Tehran 1983963113, Iran;
[email protected]
* Correspondence: [email protected]

Received: 20 April 2018; Accepted: 18 May 2018; Published: 18 June 2018

Abstract: The notion of a neutrosophic quadruple BCK/BCI-number is considered, and a neutrosophic


quadruple BCK/BCI-algebra, which consists of neutrosophic quadruple BCK/BCI-numbers,
is constructed. Several properties are investigated, and a (positive implicative) ideal in a neutrosophic
quadruple BCK-algebra and a closed ideal in a neutrosophic quadruple BCI-algebra are studied.
Given subsets A and B of a BCK/BCI-algebra, the set NQ( A, B), which consists of neutrosophic
quadruple BCK/BCI-numbers with a condition, is established. Conditions for the set NQ( A, B) to be
a (positive implicative) ideal of a neutrosophic quadruple BCK-algebra are provided, and conditions for
the set NQ( A, B) to be a (closed) ideal of a neutrosophic quadruple BCI-algebra are given. An example
to show that the set {0̃} is not a positive implicative ideal in a neutrosophic quadruple BCK-algebra is
provided, and conditions for the set {0̃} to be a positive implicative ideal in a neutrosophic quadruple
BCK-algebra are then discussed.

Keywords: neutrosophic quadruple BCK/BCI-number; neutrosophic quadruple BCK/BCI-algebra;


neutrosophic quadruple subalgebra; (positive implicative) neutrosophic quadruple ideal

MSC: 06F35; 03G25; 08A72

1. Introduction
The notion of a neutrosophic set was developed by Smarandache [1–3] and is a more general platform
that extends the notions of classic sets, (intuitionistic) fuzzy sets, and interval valued (intuitionistic)
fuzzy sets. Neutrosophic set theory has been applied to various fields (see [4–8]). Neutrosophic algebraic
structures in BCK/BCI-algebras are discussed in [9–16]. Neutrosophic quadruple algebraic structures
and hyperstructures are discussed in [17,18].
In this paper, we will use neutrosophic quadruple numbers based on a set and construct
neutrosophic quadruple BCK/BCI-algebras. We investigate several properties and consider ideals and
positive implicative ideals in neutrosophic quadruple BCK-algebra, and closed ideals in neutrosophic
quadruple BCI-algebra. Given subsets A and B of a neutrosophic quadruple BCK/BCI-algebra,
we consider sets NQ( A, B), which consist of neutrosophic quadruple BCK/BCI-numbers with a
condition. We provide conditions for the set NQ( A, B) to be a (positive implicative) ideal of a
neutrosophic quadruple BCK-algebra and for the set NQ( A, B) to be a (closed) ideal of a neutrosophic
quadruple BCI-algebra. We give an example to show that the set {0̃} is not a positive implicative ideal
in a neutrosophic quadruple BCK-algebra, and we then consider conditions for the set {0̃} to be a
positive implicative ideal in a neutrosophic quadruple BCK-algebra.

Axioms 2018, 7, 41; doi:10.3390/axioms7020041



2. Preliminaries
A BCK/BCI-algebra is an important class of logical algebras introduced by Iséki (see [19,20]).
By a BCI-algebra, we mean a set X with a special element 0 and a binary operation ∗ that satisfies
the following conditions:

(I) (∀ x, y, z ∈ X ) ((( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0);


(II) (∀ x, y ∈ X ) (( x ∗ ( x ∗ y)) ∗ y = 0);
(III) (∀ x ∈ X ) ( x ∗ x = 0);
(IV) (∀ x, y ∈ X ) ( x ∗ y = 0, y ∗ x = 0 ⇒ x = y).

If a BCI-algebra X satisfies the identity

(V) (∀ x ∈ X ) (0 ∗ x = 0),

then X is called a BCK-algebra. Any BCK/BCI-algebra X satisfies the following conditions:

(∀ x ∈ X ) ( x ∗ 0 = x ) (1)
(∀ x, y, z ∈ X ) ( x ≤ y ⇒ x ∗ z ≤ y ∗ z, z ∗ y ≤ z ∗ x ) (2)
(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z = ( x ∗ z) ∗ y) (3)
(∀ x, y, z ∈ X ) (( x ∗ z) ∗ (y ∗ z) ≤ x ∗ y) (4)

where x ≤ y if and only if x ∗ y = 0. Any BCI-algebra X satisfies the following conditions (see [21]):

(∀ x, y ∈ X )( x ∗ ( x ∗ ( x ∗ y)) = x ∗ y), (5)


(∀ x, y ∈ X )(0 ∗ ( x ∗ y) = (0 ∗ x ) ∗ (0 ∗ y)). (6)

A BCK-algebra X is said to be positive implicative if the following assertion is valid.

(∀ x, y, z ∈ X ) (( x ∗ z) ∗ (y ∗ z) = ( x ∗ y) ∗ z) . (7)

A nonempty subset S of a BCK/BCI-algebra X is called a subalgebra of X if x ∗ y ∈ S for all


x, y ∈ S. A subset I of a BCK/BCI-algebra X is called an ideal of X if it satisfies

0 ∈ I, (8)
(∀ x ∈ X ) (∀y ∈ I ) ( x ∗ y ∈ I ⇒ x ∈ I ) . (9)

A subset I of a BCI-algebra X is called a closed ideal (see [21]) of X if it is an ideal of X which satisfies

(∀ x ∈ X )( x ∈ I ⇒ 0 ∗ x ∈ I ). (10)

A subset I of a BCK-algebra X is called a positive implicative ideal (see [22]) of X if it satisfies (8) and

(∀ x, y, z ∈ X )((( x ∗ y) ∗ z ∈ I, y ∗ z ∈ I ⇒ x ∗ z ∈ I ) . (11)

Observe that every positive implicative ideal is an ideal, but the converse is not true (see [22]).
Note also that a BCK-algebra X is positive implicative if and only if every ideal of X is positive
implicative (see [22]).
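As an aside, the conditions above can be verified mechanically for finite examples. The following minimal Python sketch is an illustration only (the dictionary encoding of the Cayley table and the two-element toy algebra are assumptions of the sketch, not taken from the cited literature); it checks axioms (I)–(V) and the ideal conditions (8) and (9).

# Illustrative sketch: checking the BCK axioms (I)-(V) and the ideal
# conditions (8)-(9) for a finite algebra given by its Cayley table.
def is_bck(X, star):
    # star[x][y] encodes x * y; returns True if (X, *, 0) satisfies (I)-(V)
    for x in X:
        if star[x][x] != 0 or star[0][x] != 0:                     # (III), (V)
            return False
        for y in X:
            if star[star[x][star[x][y]]][y] != 0:                  # (II)
                return False
            if x != y and star[x][y] == 0 and star[y][x] == 0:     # (IV)
                return False
            for z in X:
                if star[star[star[x][y]][star[x][z]]][star[z][y]] != 0:  # (I)
                    return False
    return True

def is_ideal(X, star, I):
    # conditions (8) and (9): 0 is in I, and x * y in I together with y in I force x in I
    return 0 in I and all(x in I for x in X for y in I if star[x][y] in I)

# a two-element BCK-algebra (an assumed toy example): 0*0=0, 0*1=0, 1*0=1, 1*1=0
X = [0, 1]
star = {0: {0: 0, 1: 0}, 1: {0: 1, 1: 0}}
print(is_bck(X, star), is_ideal(X, star, {0}))   # True True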
We refer the reader to the books [21,22] for further information regarding BCK/BCI-algebras,
and to the site “http://fs.gallup.unm.edu/neutrosophy.htm” for further information regarding
neutrosophic set theory.

3. Neutrosophic Quadruple BCK/BCI-Algebras


We consider neutrosophic quadruple numbers based on a set instead of real or complex numbers.


Definition 1. Let X be a set. A neutrosophic quadruple X-number is an ordered quadruple ( a, xT, yI, zF )
where a, x, y, z ∈ X and T, I, F have their usual neutrosophic logic meanings.

The set of all neutrosophic quadruple X-numbers is denoted by NQ( X ), that is,

NQ( X ) := {( a, xT, yI, zF ) | a, x, y, z ∈ X },

and it is called the neutrosophic quadruple set based on X. If X is a BCK/BCI-algebra, a neutrosophic


quadruple X-number is called a neutrosophic quadruple BCK/BCI-number and we say that NQ( X ) is
the neutrosophic quadruple BCK/BCI-set.
Let X be a BCK/BCI-algebra. We define a binary operation  on NQ( X ) by

( a, xT, yI, zF )  (b, uT, vI, wF ) = ( a ∗ b, ( x ∗ u) T, (y ∗ v) I, (z ∗ w) F )

for all ( a, xT, yI, zF ), (b, uT, vI, wF ) ∈ NQ( X ). Given a1 , a2 , a3 , a4 ∈ X, the neutrosophic quadruple
BCK/BCI-number ( a1 , a2 T, a3 I, a4 F ) is denoted by ã, that is,

ã = ( a1 , a2 T, a3 I, a4 F ),

and the zero neutrosophic quadruple BCK/BCI-number (0, 0T, 0I, 0F ) is denoted by 0̃, that is,

0̃ = (0, 0T, 0I, 0F ).

We define an order relation “ ” and the equality “=” on NQ( X ) as follows:

x̃ ỹ ⇔ xi ≤ yi for i = 1, 2, 3, 4
x̃ = ỹ ⇔ xi = yi for i = 1, 2, 3, 4

for all x̃, ỹ ∈ NQ( X ). It is easy to verify that “ ” is an equivalence relation on NQ( X ).

Theorem 1. If X is a BCK/BCI-algebra, then ( NQ( X ); , 0̃) is a BCK/BCI-algebra.

Proof. Let X be a BCI-algebra. For any x̃, ỹ, z̃ ∈ NQ( X ), we have

( x̃  ỹ)  ( x̃  z̃) = ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F )
 ( x1 ∗ z1 , ( x2 ∗ z2 ) T, ( x3 ∗ z3 ) I, ( x4 ∗ z4 ) F )
= (( x1 ∗ y1 ) ∗ ( x1 ∗ z1 ), (( x2 ∗ y2 ) ∗ ( x2 ∗ z2 )) T,
(( x3 ∗ y3 ) ∗ ( x3 ∗ z3 )) I, (( x4 ∗ y4 ) ∗ ( x4 ∗ z4 )) F )
(z1 ∗ y1 , (z2 ∗ y2 ) T, (z3 ∗ y3 ) I, (z4 ∗ y4 ) F )
= z̃  ỹ

x̃  ( x̃  ỹ) = ( x1 , x2 T, x3 I, x4 F )  ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F )
= ( x1 ∗ ( x1 ∗ y1 ), ( x2 ∗ ( x2 ∗ y2 )) T, ( x3 ∗ ( x3 ∗ y3 )) I, ( x4 ∗ ( x4 ∗ y4 )) F )
(y1 , y2 T, y3 I, y4 F )
= ỹ

x̃  x̃ = ( x1 , x2 T, x3 I, x4 F )  ( x1 , x2 T, x3 I, x4 F )
= ( x1 ∗ x1 , ( x2 ∗ x2 ) T, ( x3 ∗ x3 ) I, ( x4 ∗ x4 ) F )
= (0, 0T, 0I, 0F ) = 0̃.


Assume that x̃  ỹ = 0̃ and ỹ  x̃ = 0̃. Then

( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F ) = (0, 0T, 0I, 0F )

and

(y1 ∗ x1 , (y2 ∗ x2 ) T, (y3 ∗ x3 ) I, (y4 ∗ x4 ) F ) = (0, 0T, 0I, 0F ).

It follows that x1 ∗ y1 = 0 = y1 ∗ x1 , x2 ∗ y2 = 0 = y2 ∗ x2 , x3 ∗ y3 = 0 = y3 ∗ x3 and


x4 ∗ y4 = 0 = y4 ∗ x4 . Hence, x1 = y1 , x2 = y2 , x3 = y3 , and x4 = y4 , which implies that

x̃ = ( x1 , x2 T, x3 I, x4 F ) = (y1 , y2 T, y3 I, y4 F ) = ỹ.

Therefore, we know that ( NQ( X ); , 0̃) is a BCI-algebra. We call it the neutrosophic quadruple
BCI-algebra. Moreover, if X is a BCK-algebra, then we have

0̃  x̃ = (0 ∗ x1 , (0 ∗ x2 ) T, (0 ∗ x3 ) I, (0 ∗ x4 ) F ) = (0, 0T, 0I, 0F ) = 0̃.

Hence, ( NQ( X ); , 0̃) is a BCK-algebra. We call it the neutrosophic quadruple BCK-algebra.

Example 1. If X = {0, a}, then the neutrosophic quadruple set NQ( X ) is given as follows:

NQ( X ) = {0̃, 1̃, 2̃, 3̃, 4̃, 5̃, 6̃, 7̃, 8̃, 9̃, 10̃, 11̃, 12̃, 13̃, 14̃, 15̃}

where
0̃ = (0, 0T, 0I, 0F ), 1̃ = (0, 0T, 0I, aF ), 2̃ = (0, 0T, aI, 0F ), 3̃ = (0, 0T, aI, aF ),
4̃ = (0, aT, 0I, 0F ), 5̃ = (0, aT, 0I, aF ), 6̃ = (0, aT, aI, 0F ), 7̃ = (0, aT, aI, aF ),
8̃ = ( a, 0T, 0I, 0F ), 9̃ = ( a, 0T, 0I, aF ), 10̃ = ( a, 0T, aI, 0F ), 11̃ = ( a, 0T, aI, aF ),
12̃ = ( a, aT, 0I, 0F ), 13̃ = ( a, aT, 0I, aF ), 14̃ = ( a, aT, aI, 0F ), and 15̃ = ( a, aT, aI, aF ).

Consider a BCK-algebra X = {0, a} with the binary operation ∗, which is given in Table 1.

Table 1. Cayley table for the binary operation “∗”.

∗ 0 a
0 0 0
a a 0

Then ( NQ( X ), , 0̃) is a BCK-algebra in which the operation  is given by Table 2.

Table 2. Cayley table for the binary operation “”.

	0̃	1̃	2̃	3̃	4̃	5̃	6̃	7̃	8̃	9̃	10̃	11̃	12̃	13̃	14̃	15̃
0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃
1̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃
2̃	2̃	2̃	0̃	0̃	2̃	2̃	0̃	0̃	2̃	2̃	0̃	0̃	2̃	2̃	0̃	0̃
3̃	3̃	2̃	1̃	0̃	3̃	2̃	1̃	0̃	3̃	2̃	1̃	0̃	3̃	2̃	1̃	0̃
4̃	4̃	4̃	4̃	4̃	0̃	0̃	0̃	0̃	4̃	4̃	4̃	4̃	0̃	0̃	0̃	0̃
5̃	5̃	4̃	5̃	4̃	1̃	0̃	1̃	0̃	5̃	4̃	5̃	4̃	1̃	0̃	1̃	0̃
6̃	6̃	6̃	4̃	4̃	2̃	2̃	0̃	0̃	6̃	6̃	4̃	4̃	2̃	2̃	0̃	0̃
7̃	7̃	6̃	5̃	4̃	3̃	2̃	1̃	0̃	7̃	6̃	5̃	4̃	3̃	2̃	1̃	0̃
8̃	8̃	8̃	8̃	8̃	8̃	8̃	8̃	8̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃	0̃
9̃	9̃	8̃	9̃	8̃	9̃	8̃	9̃	8̃	1̃	0̃	1̃	0̃	1̃	0̃	1̃	0̃
10̃	10̃	10̃	8̃	8̃	10̃	10̃	8̃	8̃	2̃	2̃	0̃	0̃	2̃	2̃	0̃	0̃
11̃	11̃	10̃	9̃	8̃	11̃	10̃	9̃	8̃	3̃	2̃	1̃	0̃	3̃	2̃	1̃	0̃
12̃	12̃	12̃	12̃	12̃	8̃	8̃	8̃	8̃	4̃	4̃	4̃	4̃	0̃	0̃	0̃	0̃
13̃	13̃	12̃	13̃	12̃	9̃	8̃	9̃	8̃	5̃	4̃	5̃	4̃	1̃	0̃	1̃	0̃
14̃	14̃	14̃	12̃	12̃	10̃	10̃	8̃	8̃	6̃	6̃	4̃	4̃	2̃	2̃	0̃	0̃
15̃	15̃	14̃	13̃	12̃	11̃	10̃	9̃	8̃	7̃	6̃	5̃	4̃	3̃	2̃	1̃	0̃
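The entries of Table 2 follow directly from Table 1 and the componentwise definition of ; as a quick check, the table can be regenerated by the following minimal Python sketch (an added illustration, assuming an encoding of each quadruple as a 4-tuple over {0, a}).

# Illustrative sketch: regenerating Table 2 from Table 1 and the
# componentwise definition of the operation on NQ(X), where X = {0, 'a'}.
from itertools import product

star = {(0, 0): 0, (0, 'a'): 0, ('a', 0): 'a', ('a', 'a'): 0}   # Table 1

# Encode (a1, a2T, a3I, a4F) as the tuple (a1, a2, a3, a4); the labels
# 0~, 1~, ..., 15~ of Example 1 then correspond to this binary ordering.
quadruples = list(product([0, 'a'], repeat=4))
label = {q: i for i, q in enumerate(quadruples)}

def op(x, y):
    # componentwise operation on neutrosophic quadruple numbers
    return tuple(star[(xi, yi)] for xi, yi in zip(x, y))

for x in quadruples:
    print([label[op(x, y)] for y in quadruples])   # rows of Table 2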

Theorem 2. The neutrosophic quadruple set NQ( X ) based on a positive implicative BCK-algebra X is a
positive implicative BCK-algebra.

Proof. Let X be a positive implicative BCK-algebra. Then X is a BCK-algebra, so ( NQ( X ); , 0̃) is a


BCK-algebra by Theorem 1. Let x̃, ỹ, z̃ ∈ NQ( X ). Then

( xi ∗ zi ) ∗ ( yi ∗ zi ) = ( xi ∗ yi ) ∗ zi

for all i = 1, 2, 3, 4 since xi , yi , zi ∈ X and X is a positive implicative BCK-algebra. Hence, (x̃  z̃) 
(ỹ  z̃) = ( x̃  ỹ)  z̃; therefore, NQ( X ) based on a positive implicative BCK-algebra X is a positive
implicative BCK-algebra.

Proposition 1. The neutrosophic quadruple set NQ( X ) based on a positive implicative BCK-algebra X satisfies
the following assertions.

(∀ x̃, ỹ, z̃ ∈ NQ( X )) ( x̃  ỹ z̃ ⇒ x̃  z̃ ỹ  z̃) (12)


(∀ x̃, ỹ ∈ NQ( X )) ( x̃  ỹ ỹ ⇒ x̃ ỹ). (13)

Proof. Let x̃, ỹ, z̃ ∈ NQ( X ). If x̃  ỹ z̃, then

0̃ = ( x̃  ỹ)  z̃ = ( x̃  z̃)  (ỹ  z̃),

so x̃  z̃ ỹ  z̃. Assume that x̃  ỹ ỹ. Using Equation (12) implies that

x̃  ỹ ỹ  ỹ = 0̃,

so x̃  ỹ = 0̃, i.e., x̃ ỹ.

Let X be a BCK/BCI-algebra. Given a, b ∈ X and subsets A and B of X, consider the sets

NQ( a, B) := {( a, aT, yI, zF ) ∈ NQ( X ) | y, z ∈ B}

NQ( A, b) := {( a, xT, bI, bF ) ∈ NQ( X ) | a, x ∈ A}

NQ( A, B) := {( a, xT, yI, zF ) ∈ NQ( X ) | a, x ∈ A; y, z ∈ B}


NQ( A∗ , B) := ⋃_{ a ∈ A } NQ( a, B)

NQ( A, B∗ ) := ⋃_{ b ∈ B } NQ( A, b)


and

NQ( A ∪ B) := NQ( A, 0) ∪ NQ(0, B).

The set NQ( A, A) is denoted by NQ( A).
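For instance, in the BCK-algebra X = {0, a} of Example 1, taking A = {0} and B = X gives NQ( A, B) = {0̃, 1̃, 2̃, 3̃} and NQ( A) = {0̃}.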

Proposition 2. Let X be a BCK/BCI-algebra. Given a, b ∈ X and subsets A and B of X, we have

(1) NQ( A∗ , B) and NQ( A, B∗ ) are subsets of NQ( A, B).


(2) If 0 ∈ A ∩ B, then NQ( A ∪ B) is a subset of NQ( A, B).

Proof. Straightforward.

Let X be a BCK/BCI-algebra. Given a, b ∈ X and subalgebras A and B of X, NQ( a, B) and


NQ( A, b) may not be subalgebras of NQ( X ) since

( a, aT, x3 I, x4 F )  ( a, aT, u3 I, v4 F ) = (0, 0T, ( x3 ∗ u3 ) I, ( x4 ∗ v4 ) F ) ∉ NQ( a, B)

and

( x1 , x2 T, bI, bF )  (u1 , u2 T, bI, bF ) = ( x1 ∗ u1 , ( x2 ∗ u2 ) T, 0I, 0F ) ∉ NQ( A, b)

for ( a, aT, x3 I, x4 F ) ∈ NQ( a, B), ( a, aT, u3 I, v4 F ) ∈ NQ( a, B), ( x1 , x2 T, bI, bF ) ∈ NQ( A, b),
and (u1 , u2 T, bI, bF ) ∈ NQ( A, b).

Theorem 3. If A and B are subalgebras of a BCK/BCI-algebra X, then the set NQ( A, B) is a subalgebra of
NQ( X ), which is called a neutrosophic quadruple subalgebra.

Proof. Assume that A and B are subalgebras of a BCK/BCI-algebra X. Let x̃ = ( x1 , x2 T, x3 I, x4 F )


and ỹ = (y1 , y2 T, y3 I, y4 F ) be elements of NQ( A, B). Then x1 , x2 , y1 , y2 ∈ A and x3 , x4 , y3 , y4 ∈ B,
which implies that x1 ∗ y1 ∈ A, x2 ∗ y2 ∈ A, x3 ∗ y3 ∈ B, and x4 ∗ y4 ∈ B. Hence,

x̃  ỹ = ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F ) ∈ NQ( A, B),

so NQ( A, B) is a subalgebra of NQ( X ).

Theorem 4. If A and B are ideals of a BCK/BCI-algebra X, then the set NQ( A, B) is an ideal of NQ( X ),
which is called a neutrosophic quadruple ideal.

Proof. Assume that A and B are ideals of a BCK/BCI-algebra X. Obviously, 0̃ ∈ NQ( A, B).
Let x̃ = ( x1 , x2 T, x3 I, x4 F ) and ỹ = (y1 , y2 T, y3 I, y4 F ) be elements of NQ( X ) such that
x̃  ỹ ∈ NQ( A, B) and ỹ ∈ NQ( A, B). Then

x̃  ỹ = ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F ) ∈ NQ( A, B),

so x1 ∗ y1 ∈ A, x2 ∗ y2 ∈ A, x3 ∗ y3 ∈ B and x4 ∗ y4 ∈ B. Since ỹ ∈ NQ( A, B), we have


y1 , y2 ∈ A and y3 , y4 ∈ B. Since A and B are ideals of X, it follows that x1 , x2 ∈ A and x3 , x4 ∈ B.
Hence, x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A, B), so NQ( A, B) is an ideal of NQ( X ).

Since every ideal is a subalgebra in a BCK-algebra, we have the following corollary.

Corollary 1. If A and B are ideals of a BCK-algebra X, then the set NQ( A, B) is a subalgebra of NQ( X ).

The following example shows that Corollary 1 is not true in a BCI-algebra.


Example 2. Consider a BCI-algebra (Z, −, 0). If we take A = N and B = Z, then NQ( A, B) is an ideal of
NQ(Z). However, it is not a subalgebra of NQ(Z) since

(2, 3T, −5I, 6F )  (3, 5T, 6I, −7F ) = (−1, −2T, −11I, 13F ) ∉ NQ( A, B)

for (2, 3T, −5I, 6F ), (3, 5T, 6I, −7F ) ∈ NQ( A, B).

Theorem 5. If A and B are closed ideals of a BCI-algebra X, then the set NQ( A, B) is a closed ideal of NQ( X ).

Proof. If A and B are closed ideals of a BCI-algebra X, then the set NQ( A, B) is an ideal of NQ( X ) by
Theorem 4. Let x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A, B). Then

0̃  x̃ = (0 ∗ x1 , (0 ∗ x2 ) T, (0 ∗ x3 ) I, (0 ∗ x4 ) F ) ∈ NQ( A, B)

since 0 ∗ x1 , 0 ∗ x2 ∈ A and 0 ∗ x3 , 0 ∗ x4 ∈ B. Therefore, NQ( A, B) is a closed ideal of NQ( X ).

Since every closed ideal of a BCI-algebra X is a subalgebra of X, we have the following corollary.

Corollary 2. If A and B are closed ideals of a BCI-algebra X, then the set NQ( A, B) is a subalgebra of NQ( X ).

In the following example, we know that there exist ideals A and B in a BCI-algebra X such that
NQ( A, B) is not a closed ideal of NQ( X ).

Example 3. Consider BCI-algebras (Y, ∗, 0) and (Z, −, 0). Then X = Y × Z is a BCI-algebra (see [21]).
Let A = Y × N and B = {0} × N. Then A and B are ideals of X, so NQ( A, B) is an ideal of NQ( X ) by
Theorem 4. Let ((0, 0), (0, 1) T, (0, 2) I, (0, 3) F ) ∈ NQ( A, B). Then

((0, 0), (0, 0) T, (0, 0) I, (0, 0) F )  ((0, 0), (0, 1) T, (0, 2) I, (0, 3) F )
= ((0, 0), (0, −1) T, (0, −2) I, (0, −3) F ) ∉ NQ( A, B).

Hence, NQ( A, B) is not a closed ideal of NQ( X ).

We now provide conditions under which the set NQ( A, B) is a closed ideal of NQ( X ).

Theorem 6. Let A and B be ideals of a BCI-algebra X and let

Γ := { ã ∈ NQ( X ) | (∀ x̃ ∈ NQ( X ))( x̃ ã ⇒ x̃ = ã)}.

Assume that, if Γ ⊆ NQ( A, B), then |Γ| < ∞. Then NQ( A, B) is a closed ideal of NQ( X ).

Proof. If A and B are ideals of X, then NQ( A, B) is an ideal of NQ( X ) by Theorem 4.


Let ã = ( a1 , a2 T, a3 I, a4 F ) ∈ NQ( A, B). For any n ∈ N, denote n(ã) := 0̃  (0̃  ã)n . Then n(ã) ∈ Γ and

n( ã) = (0 ∗ (0 ∗ a1 )n , (0 ∗ (0 ∗ a2 )n ) T, (0 ∗ (0 ∗ a3 )n ) I, (0 ∗ (0 ∗ a4 )n ) F )
= (0 ∗ (0 ∗ a1n ), (0 ∗ (0 ∗ a2n )) T, (0 ∗ (0 ∗ a3n )) I, (0 ∗ (0 ∗ a4n )) F )
= 0̃  (0̃  ãn ).

Hence,

n( ã)  ãn = (0̃  (0̃  ãn ))  ãn


= (0̃  ãn )  (0̃  ãn )
= 0̃ ∈ NQ( A, B),


so n( ã) ∈ NQ( A, B), since ã ∈ NQ( A, B), and NQ( A, B) is an ideal of NQ( X ). Since |Γ| < ∞,
it follows that there exists k ∈ N such that n( ã) = (n + k )( ã), that is, n( ã) = n( ã)  (0̃  ã)k , and thus

k ( ã) = 0̃  (0̃  ã)k


= (n( ã)  (0̃  ã)k )  n( ã)
= n( ã)  n( ã) = 0̃,

i.e., (k − 1)( ã)  (0̃  ã) = 0̃. Since 0̃  ã ∈ Γ, it follows that 0̃  ã = (k − 1)( ã) ∈ NQ( A, B).
Therefore, NQ( A, B) is a closed ideal of NQ( X ).

Theorem 7. Given two elements a and b in a BCI-algebra X, let

A a := { x ∈ X | a ∗ x = a} and Bb := { x ∈ X | b ∗ x = b}. (14)

Then NQ( A a , Bb ) is a closed ideal of NQ( X ).

Proof. Since a ∗ 0 = a and b ∗ 0 = b, we have 0 ∈ A a ∩ Bb . Thus, 0̃ ∈ NQ( A a , Bb ). If x ∈ A a and


y ∈ Bb , then

0 ∗ x = ( a ∗ x ) ∗ a = a ∗ a = 0 and 0 ∗ y = (b ∗ y) ∗ b = b ∗ b = 0. (15)

Let x, y, c, d ∈ X be such that x, y ∗ x ∈ A a and c, d ∗ c ∈ Bb . Then

( a ∗ y ) ∗ a = 0 ∗ y = (0 ∗ y ) ∗ 0 = (0 ∗ y ) ∗ (0 ∗ x ) = 0 ∗ ( y ∗ x ) = 0

and

(b ∗ d) ∗ b = 0 ∗ d = (0 ∗ d) ∗ 0 = (0 ∗ d) ∗ (0 ∗ c) = 0 ∗ (d ∗ c) = 0,

that is, a ∗ y ≤ a and b ∗ d ≤ b. On the other hand,

a = a ∗ (y ∗ x ) = ( a ∗ x ) ∗ (y ∗ x ) ≤ a ∗ y

and

b = b ∗ (d ∗ c) = (b ∗ c) ∗ (d ∗ c) ≤ b ∗ d.

Thus, a ∗ y = a and b ∗ d = b, i.e., y ∈ A a and d ∈ Bb . Hence, A a and Bb are ideals of X, and


NQ( A a , Bb ) is therefore an ideal of NQ( X ) by Theorem 4. Let x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A a , Bb ).
Then x1 , x2 ∈ A a , and x3 , x4 ∈ Bb . It follows from Equation (15) that 0 ∗ x1 = 0 ∈ A a , 0 ∗ x2 = 0 ∈ A a ,
0 ∗ x3 = 0 ∈ Bb , and 0 ∗ x4 = 0 ∈ Bb . Hence,

0̃  x̃ = (0 ∗ x1 , (0 ∗ x2 ) T, (0 ∗ x3 ) I, (0 ∗ x4 ) F ) ∈ NQ( A a , Bb ).

Therefore, NQ( A a , Bb ) is a closed ideal of NQ( X ).

Proposition 3. Let A and B be ideals of a BCK-algebra X. Then

NQ( A) ∩ NQ( B) = {0̃} ⇔ (∀ x̃ ∈ NQ( A))(∀ỹ ∈ NQ( B))( x̃  ỹ = x̃ ). (16)

Proof. Note that NQ( A) and NQ( B) are ideals of NQ( X ). Assume that NQ( A) ∩ NQ( B) = {0̃}. Let
x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A) and ỹ = (y1 , y2 T, y3 I, y4 F ) ∈ NQ( B).


Since x̃  ( x̃  ỹ) x̃ and x̃  ( x̃  ỹ) ỹ, it follows that x̃  ( x̃  ỹ) ∈ NQ( A) ∩ NQ( B) = {0̃}.
Obviously, ( x̃  ỹ)  x̃ ∈ {0̃}. Hence, x̃  ỹ = x̃.
Conversely, suppose that x̃  ỹ = x̃ for all x̃ ∈ NQ( A) and ỹ ∈ NQ(B). If z̃ ∈ NQ( A) ∩ NQ(B),
then z̃ ∈ NQ( A) and z̃ ∈ NQ( B), and it follows from the hypothesis that z̃ = z̃  z̃ = 0̃.
Hence NQ( A) ∩ NQ( B) = {0̃}.

Theorem 8. Let A and B be subsets of a BCK-algebra X such that

(∀ a, b ∈ A ∩ B)(K ( a, b) ⊆ A ∩ B) (17)

where K ( a, b) := { x ∈ X | x ∗ a ≤ b}. Then the set NQ( A, B) is an ideal of NQ( X ).

Proof. If x ∈ A ∩ B, then 0 ∈ K ( x, x ) since 0 ∗ x ≤ x. Hence, 0 ∈ A ∩ B by Equation (17), so it is clear


that 0̃ ∈ NQ( A, B). Let x̃ = ( x1 , x2 T, x3 I, x4 F ) and ỹ = (y1 , y2 T, y3 I, y4 F ) be elements of NQ( X ) such
that x̃  ỹ ∈ NQ( A, B) and ỹ ∈ NQ( A, B). Then

x̃  ỹ = ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F ) ∈ NQ( A, B),

so x1 ∗ y1 ∈ A, x2 ∗ y2 ∈ A, x3 ∗ y3 ∈ B, and x4 ∗ y4 ∈ B. Using (II), we have x1 ∈ K ( x1 ∗ y1 , y1 ) ⊆ A,


x2 ∈ K ( x2 ∗ y2 , y2 ) ⊆ A, x3 ∈ K ( x3 ∗ y3 , y3 ) ⊆ B, and x4 ∈ K ( x4 ∗ y4 , y4 ) ⊆ B. This implies that
x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A, B). Therefore, NQ( A, B) is an ideal of NQ( X ).

Corollary 3. Let A and B be subsets of a BCK-algebra X such that

(∀ a, x, y ∈ X )( x, y ∈ A ∩ B, ( a ∗ x ) ∗ y = 0 ⇒ a ∈ A ∩ B). (18)

Then the set NQ( A, B) is an ideal of NQ( X ).

Theorem 9. Let A and B be nonempty subsets of a BCK-algebra X such that

(∀ a, x, y ∈ X )( x, y ∈ A (or B), a ∗ x ≤ y ⇒ a ∈ A (or B)). (19)

Then the set NQ( A, B) is an ideal of NQ( X ).

Proof. Assume that the condition expressed by Equation (19) is valid for nonempty subsets A and B
of X. Since 0 ∗ x ≤ x for any x ∈ A (or B), we have 0 ∈ A (or B) by Equation (19). Hence, it is clear
that 0̃ ∈ NQ( A, B). Let x̃ = (x1 , x2 T, x3 I, x4 F) and ỹ = (y1 , y2 T, y3 I, y4 F) be elements of NQ(X) such
that x̃  ỹ ∈ NQ( A, B) and ỹ ∈ NQ( A, B). Then

x̃  ỹ = ( x1 ∗ y1 , ( x2 ∗ y2 ) T, ( x3 ∗ y3 ) I, ( x4 ∗ y4 ) F ) ∈ NQ( A, B),

so x1 ∗ y1 ∈ A, x2 ∗ y2 ∈ A, x3 ∗ y3 ∈ B, and x4 ∗ y4 ∈ B. Note that xi ∗ ( xi ∗ yi ) ≤ yi for i = 1, 2, 3, 4.


It follows from Equation (19) that x1 , x2 ∈ A and x3 , x4 ∈ B. Hence,

x̃ = ( x1 , x2 T, x3 I, x4 F ) ∈ NQ( A, B);

therefore, NQ( A, B) is an ideal of NQ( X ).

Theorem 10. If A and B are positive implicative ideals of a BCK-algebra X, then the set NQ( A, B) is a positive
implicative ideal of NQ( X ), which is called a positive implicative neutrosophic quadruple ideal.


Proof. Assume that A and B are positive implicative ideals of a BCK-algebra X. Obviously, 0̃ ∈ NQ( A, B).
Let x̃ = (x1 , x2 T, x3 I, x4 F), ỹ = (y1 , y2 T, y3 I, y4 F), and z̃ = (z1 , z2 T, z3 I, z4 F ) be elements of NQ( X )
such that ( x̃  ỹ)  z̃ ∈ NQ( A, B) and ỹ  z̃ ∈ NQ( A, B). Then

( x̃  ỹ)  z̃ = (( x1 ∗ y1 ) ∗ z1 , (( x2 ∗ y2 ) ∗ z2 ) T,
(( x3 ∗ y3 ) ∗ z3 ) I, (( x4 ∗ y4 ) ∗ z4 ) F ) ∈ NQ( A, B),

and

ỹ  z̃ = (y1 ∗ z1 , (y2 ∗ z2 ) T, (y3 ∗ z3 ) I, (y4 ∗ z4 ) F ) ∈ NQ( A, B),

so ( x1 ∗ y1 ) ∗ z1 ∈ A, ( x2 ∗ y2 ) ∗ z2 ∈ A, ( x3 ∗ y3 ) ∗ z3 ∈ B, ( x4 ∗ y4 ) ∗ z4 ∈ B, y1 ∗ z1 ∈ A, y2 ∗ z2 ∈ A,
y3 ∗ z3 ∈ B, and y4 ∗ z4 ∈ B. Since A and B are positive implicative ideals of X, it follows that
x1 ∗ z1 , x2 ∗ z2 ∈ A and x3 ∗ z3 , x4 ∗ z4 ∈ B. Hence,

x̃  z̃ = ( x1 ∗ z1 , ( x2 ∗ z2 ) T, ( x3 ∗ z3 ) I, ( x4 ∗ z4 ) F ) ∈ NQ( A, B),

so NQ( A, B) is a positive implicative ideal of NQ( X ).

Theorem 11. Let A and B be ideals of a BCK-algebra X such that

(∀ x, y, z ∈ X )(( x ∗ y) ∗ z ∈ A (or B) ⇒ ( x ∗ z) ∗ (y ∗ z) ∈ A (or B)). (20)

Then NQ( A, B) is a positive implicative ideal of NQ( X ).

Proof. Since A and B are ideals of X, it follows from Theorem 4 that NQ( A, B) is an ideal of NQ( X ).
Let x̃ = ( x1 , x2 T, x3 I, x4 F ), ỹ = (y1 , y2 T, y3 I, y4 F ), and z̃ = (z1 , z2 T, z3 I, z4 F ) be elements of NQ( X )
such that ( x̃  ỹ)  z̃ ∈ NQ( A, B) and ỹ  z̃ ∈ NQ( A, B). Then

( x̃  ỹ)  z̃ = (( x1 ∗ y1 ) ∗ z1 , (( x2 ∗ y2 ) ∗ z2 ) T,
(( x3 ∗ y3 ) ∗ z3 ) I, (( x4 ∗ y4 ) ∗ z4 ) F ) ∈ NQ( A, B),

and

ỹ  z̃ = (y1 ∗ z1 , (y2 ∗ z2 ) T, (y3 ∗ z3 ) I, (y4 ∗ z4 ) F ) ∈ NQ( A, B),

so ( x1 ∗ y1 ) ∗ z1 ∈ A, ( x2 ∗ y2 ) ∗ z2 ∈ A, ( x3 ∗ y3 ) ∗ z3 ∈ B, ( x4 ∗ y4 ) ∗ z4 ∈ B, y1 ∗ z1 ∈ A, y2 ∗ z2 ∈ A,
y3 ∗ z3 ∈ B, and y4 ∗ z4 ∈ B. It follows from Equation (20) that ( x1 ∗ z1 ) ∗ (y1 ∗ z1 ) ∈ A, ( x2 ∗ z2 ) ∗ (y2 ∗
z2 ) ∈ A, ( x3 ∗ z3 ) ∗ (y3 ∗ z3 ) ∈ B, and ( x4 ∗ z4 ) ∗ (y4 ∗ z4 ) ∈ B. Since A and B are ideals of X, we get
x1 ∗ z1 ∈ A, x2 ∗ z2 ∈ A, x3 ∗ z3 ∈ B, and x4 ∗ z4 ∈ B. Hence,

x̃  z̃ = ( x1 ∗ z1 , ( x2 ∗ z2 ) T, ( x3 ∗ z3 ) I, ( x4 ∗ z4 ) F ) ∈ NQ( A, B).

Therefore, NQ( A, B) is a positive implicative ideal of NQ( X ).

Corollary 4. Let A and B be ideals of a BCK-algebra X such that

(∀ x, y ∈ X )(( x ∗ y) ∗ y ∈ A (or B) ⇒ x ∗ y ∈ A (or B)). (21)

Then NQ( A, B) is a positive implicative ideal of NQ( X ).

Proof. If the condition expressed in Equation (21) is valid, then the condition expressed in Equation (20)
is true. Hence, NQ( A, B) is a positive implicative ideal of NQ( X ) by Theorem 11.


Theorem 12. Let A and B be subsets of a BCK-algebra X such that 0 ∈ A ∩ B and

(( x ∗ y) ∗ y) ∗ z ∈ A (or B), z ∈ A (or B) ⇒ x ∗ y ∈ A (or B) (22)

for all x, y, z ∈ X. Then NQ( A, B) is a positive implicative ideal of NQ( X ).

Proof. Since 0 ∈ A ∩ B, it is clear that 0̃ ∈ NQ( A, B). We first show that

(∀ x, y ∈ X )(x ∗ y ∈ A (or B), y ∈ A (or B) ⇒ x ∈ A (or B)). (23)

Let x, y ∈ X be such that x ∗ y ∈ A (or B) and y ∈ A (or B). Then

(( x ∗ 0) ∗ 0) ∗ y = x ∗ y ∈ A (or B)

by Equation (1), which, based on Equations (1) and (22), implies that x = x ∗ 0 ∈ A (or B).
Let x̃ = ( x1 , x2 T, x3 I, x4 F ), ỹ = (y1 , y2 T, y3 I, y4 F), and z̃ = (z1 , z2 T, z3 I, z4 F) be elements of NQ(X)
such that (x̃  ỹ)  z̃ ∈ NQ( A, B) and ỹ  z̃ ∈ NQ( A, B). Then

( x̃  ỹ)  z̃ = (( x1 ∗ y1 ) ∗ z1 , (( x2 ∗ y2 ) ∗ z2 ) T,
(( x3 ∗ y3 ) ∗ z3 ) I, (( x4 ∗ y4 ) ∗ z4 ) F ) ∈ NQ( A, B),

and

ỹ  z̃ = (y1 ∗ z1 , (y2 ∗ z2 ) T, (y3 ∗ z3 ) I, (y4 ∗ z4 ) F ) ∈ NQ( A, B),

so ( x1 ∗ y1 ) ∗ z1 ∈ A, ( x2 ∗ y2 ) ∗ z2 ∈ A, ( x3 ∗ y3 ) ∗ z3 ∈ B, ( x4 ∗ y4 ) ∗ z4 ∈ B, y1 ∗ z1 ∈ A, y2 ∗ z2 ∈ A,
y3 ∗ z3 ∈ B, and y4 ∗ z4 ∈ B. Note that

((( xi ∗ zi ) ∗ zi ) ∗ (yi ∗ zi )) ∗ (( xi ∗ yi ) ∗ zi ) = 0 ∈ A (or B)

for i = 1, 2, 3, 4. Since ( xi ∗ yi ) ∗ zi ∈ A for i = 1, 2 and ( x j ∗ y j ) ∗ z j ∈ B for j = 3, 4, it follows from


Equation (23) that (( xi ∗ zi ) ∗ zi ) ∗ (yi ∗ zi ) ∈ A for i = 1, 2, and (( x j ∗ z j ) ∗ z j ) ∗ (y j ∗ z j ) ∈ B for j = 3, 4.
Moreover, since yi ∗ zi ∈ A for i = 1, 2, and y j ∗ z j ∈ B for j = 3, 4, we have x1 ∗ z1 ∈ A, x2 ∗ z2 ∈ A,
x3 ∗ z3 ∈ B, and x4 ∗ z4 ∈ B by Equation (22). Hence,

x̃  z̃ = ( x1 ∗ z1 , ( x2 ∗ z2 ) T, ( x3 ∗ z3 ) I, ( x4 ∗ z4 ) F ) ∈ NQ( A, B).

Therefore, NQ( A, B) is a positive implicative ideal of NQ( X ).

Theorem 13. Let A and B be subsets of a BCK-algebra X such that NQ( A, B) is a positive implicative ideal of
NQ( X ). Then the set

Ω ã := { x̃ ∈ NQ( X ) | x̃  ã ∈ NQ( A, B)} (24)

is an ideal of NQ( X ) for any ã ∈ NQ( X ).

Proof. Obviously, 0̃ ∈ Ω ã . Let x̃, ỹ ∈ NQ( X ) be such that x̃  ỹ ∈ Ω ã and ỹ ∈ Ω ã . Then


( x̃  ỹ)  ã ∈ NQ( A, B) and ỹ  ã ∈ NQ( A, B). Since NQ( A, B) is a positive implicative ideal of
NQ( X ), it follows from Equation (11) that x̃  ã ∈ NQ( A, B) and therefore that x̃ ∈ Ω ã . Hence, Ω ã is
an ideal of NQ( X ).

Combining Theorems 12 and 13, we have the following corollary.


Corollary 5. If A and B are subsets of a BCK-algebra X satisfying 0 ∈ A ∩ B and the condition expressed in
Equation (22), then the set Ω ã in Equation (24) is an ideal of NQ( X ) for all ã ∈ NQ( X ).

Theorem 14. For any subsets A and B of a BCK-algebra X, if the set Ω ã in Equation (24) is an ideal of NQ( X )
for all ã ∈ NQ( X ), then NQ( A, B) is a positive implicative ideal of NQ( X ).

Proof. Since 0̃ ∈ Ω ã , we have 0̃ = 0̃  ã ∈ NQ( A, B). Let x̃, ỹ, z̃ ∈ NQ( X ) be such that
( x̃  ỹ)  z̃ ∈ NQ( A, B) and ỹ  z̃ ∈ NQ( A, B). Then x̃  ỹ ∈ Ωz̃ and ỹ ∈ Ωz̃ . Since Ωz̃ is an
ideal of NQ( X ), it follows that x̃ ∈ Ωz̃ . Hence, x̃  z̃ ∈ NQ( A, B). Therefore, NQ( A, B) is a positive
implicative ideal of NQ( X ).

Theorem 15. For any ideals A and B of a BCK-algebra X and for any ã ∈ NQ( X ), if the set Ω ã in
Equation (24) is an ideal of NQ( X ), then NQ( X ) is a positive implicative BCK-algebra.

Proof. Let Ω be any ideal of NQ( X ). For any x̃, ỹ, z̃ ∈ NQ( X ), assume that ( x̃  ỹ)  z̃ ∈ Ω and
ỹ  z̃ ∈ Ω. Then x̃  ỹ ∈ Ωz̃ and ỹ ∈ Ωz̃ . Since Ωz̃ is an ideal of NQ( X ), it follows that x̃ ∈ Ωz̃ .
Hence, x̃  z̃ ∈ Ω, which shows that Ω is a positive implicative ideal of NQ( X ). Therefore, NQ( X ) is
a positive implicative BCK-algebra.

In general, the set {0̃} is an ideal of any neutrosophic quadruple BCK-algebra NQ( X ), but it is
not a positive implicative ideal of NQ( X ) as seen in the following example.

Example 4. Consider a BCK-algebra X = {0, 1, 2} with the binary operation ∗, which is given in Table 3.

Table 3. Cayley table for the binary operation “∗”.

∗ 0 1 2
0 0 0 0
1 1 0 0
2 2 1 0

Then the neutrosophic quadruple BCK-algebra NQ( X ) has 81 elements. If we take ã = (2, 2T, 2I, 2F )
and b̃ = (1, 1T, 1I, 1F ) in NQ( X ), then

( ã  b̃)  b̃ = ((2 ∗ 1) ∗ 1, ((2 ∗ 1) ∗ 1) T, ((2 ∗ 1) ∗ 1) I, ((2 ∗ 1) ∗ 1) F )


= (1 ∗ 1, (1 ∗ 1) T, (1 ∗ 1) I, (1 ∗ 1) F ) = (0, 0T, 0I, 0F ) = 0̃,

and b̃  b̃ = 0̃. However,

ã  b̃ = (2 ∗ 1, (2 ∗ 1) T, (2 ∗ 1) I, (2 ∗ 1) F ) = (1, 1T, 1I, 1F ) ≠ 0̃.

Hence, {0̃} is not a positive implicative ideal of NQ( X ).
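The computation in Example 4 can also be checked mechanically; the following minimal Python sketch (an added illustration, assuming a tuple encoding of ã and b̃ and the Cayley table of Table 3) confirms that ( ã  b̃)  b̃ = 0̃ and b̃  b̃ = 0̃ while ã  b̃ ≠ 0̃.

# Illustrative sketch for Example 4: X = {0, 1, 2} with the operation of Table 3.
star = [[0, 0, 0],
        [1, 0, 0],
        [2, 1, 0]]                      # row x, column y gives x * y

def op(x, y):
    # componentwise operation on quadruples over X
    return tuple(star[xi][yi] for xi, yi in zip(x, y))

a = (2, 2, 2, 2)                        # encodes (2, 2T, 2I, 2F)
b = (1, 1, 1, 1)                        # encodes (1, 1T, 1I, 1F)
zero = (0, 0, 0, 0)

print(op(op(a, b), b) == zero)          # True
print(op(b, b) == zero)                 # True
print(op(a, b) == zero)                 # False, since a * b = (1, 1T, 1I, 1F)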

We now provide conditions for the set {0̃} to be a positive implicative ideal in the neutrosophic
quadruple BCK-algebra.

Theorem 16. Let NQ( X ) be a neutrosophic quadruple BCK-algebra. If the set

Ω( ã) := { x̃ ∈ NQ( X ) | x̃ ã} (25)

is an ideal of NQ( X ) for all ã ∈ NQ( X ), then {0̃} is a positive implicative ideal of NQ( X ).


Proof. We first show that

(∀ x̃, ỹ ∈ NQ( X ))(( x̃  ỹ)  ỹ = 0̃ ⇒ x̃  ỹ = 0̃). (26)

Assume that ( x̃  ỹ)  ỹ = 0̃ for all x̃, ỹ ∈ NQ( X ). Then x̃  ỹ ỹ, so x̃  ỹ ∈ Ω(ỹ). Since
ỹ ∈ Ω(ỹ) and Ω(ỹ) is an ideal of NQ( X ), we have x̃ ∈ Ω(ỹ). Thus, x̃ ỹ, that is, x̃  ỹ = 0̃.
Let ũ := ( x̃  ỹ)  ỹ. Then

(( x̃  ũ)  ỹ)  ỹ = (( x̃  ỹ)  ỹ)  ũ = 0̃,

which implies, based on Equations (3) and (26), that

( x̃  ỹ)  (( x̃  ỹ)  ỹ) = ( x̃  ỹ)  ũ = ( x̃  ũ)  ỹ = 0̃,

that is, x̃  ỹ ( x̃  ỹ)  ỹ. Since ( x̃  ỹ)  ỹ x̃  ỹ, it follows that

( x̃  ỹ)  ỹ = x̃  ỹ. (27)

If we put ỹ = x̃  (ỹ  (ỹ  x̃ )) in Equation (27), then

x̃  ( x̃  (ỹ  (ỹ  x̃ ))) = ( x̃  ( x̃  (ỹ  (ỹ  x̃ ))))  ( x̃  (ỹ  (ỹ  x̃ )))


(ỹ  (ỹ  x̃ ))  ( x̃  (ỹ  (ỹ  x̃ )))
(ỹ  (ỹ  x̃ ))  ( x̃  ỹ)
= (ỹ  ( x̃  ỹ))  (ỹ  x̃ )
= ((ỹ  ( x̃  ỹ))  (ỹ  x̃ ))  (ỹ  x̃ )
( x̃  ( x̃  ỹ))  (ỹ  x̃ ).

On the other hand,

(( x̃  ( x̃  ỹ))  (ỹ  x̃ ))  ( x̃  ( x̃  (ỹ  (ỹ  x̃ ))))


= (( x̃  ( x̃  ( x̃  (ỹ  (ỹ  x̃ )))))  ( x̃  ỹ))  (ỹ  x̃ ))
= (( x̃  (ỹ  (ỹ  x̃ )))  ( x̃  ỹ))  (ỹ  x̃ ))
(ỹ  (ỹ  (ỹ  x̃ )))  (ỹ  x̃ )) = 0̃,

so (( x̃  ( x̃  ỹ))  (ỹ  x̃ ))  ( x̃  ( x̃  (ỹ  (ỹ  x̃ )))) = 0̃, that is,

(( x̃  ( x̃  ỹ))  (ỹ  x̃ )) x̃  ( x̃  (ỹ  (ỹ  x̃ ))).

Hence,

x̃  ( x̃  (ỹ  (ỹ  x̃ ))) = (( x̃  ( x̃  ỹ))  (ỹ  x̃ )). (28)

If we use ỹ  x̃ instead of x̃ in Equation (28), then

ỹ  x̃ = (ỹ  x̃ )  0̃
= (ỹ  x̃ )  ((ỹ  x̃ )  (ỹ  (ỹ  (ỹ  x̃ ))))
= ((ỹ  x̃ )  ((ỹ  x̃ )  ỹ))  (ỹ  (ỹ  x̃ ))
= (ỹ  x̃ )  (ỹ  (ỹ  x̃ )),


which, by taking x̃ = ỹ  x̃, implies that

ỹ  (ỹ  x̃ ) = (ỹ  (ỹ  x̃ ))  (ỹ  (ỹ  (ỹ  x̃ )))


= (ỹ  (ỹ  x̃ ))  (ỹ  x̃ ).

It follows that

(ỹ  (ỹ  x̃ ))  ( x̃  ỹ) = ((ỹ  (ỹ  x̃ ))  (ỹ  x̃ ))  ( x̃  ỹ)


( x̃  (ỹ  x̃ ))  ( x̃  ỹ)
= ( x̃  ( x̃  ỹ))  (ỹ  x̃ ),

so,

ỹ  x̃ = (ỹ  (ỹ  (ỹ  x̃ )))  0̃


= (ỹ  (ỹ  (ỹ  x̃ )))  ((ỹ  x̃ )  ỹ)
((ỹ  x̃ )  ((ỹ  x̃ )  ỹ))  (ỹ  (ỹ  x̃ ))
= (ỹ  x̃ )  (ỹ  (ỹ  x̃ ))
(ỹ  x̃ )  x̃.

Since (ỹ  x̃ )  x̃ ỹ  x̃, it follows that

(ỹ  x̃ )  x̃ = ỹ  x̃. (29)

Based on Equation (29), it follows that

(( x̃  z̃)  (ỹ  z̃))  (( x̃  ỹ)  z̃)


= ((( x̃  z̃)  z̃)  (ỹ  z̃))  (( x̃  ỹ)  z̃)
(( x̃  z̃)  ỹ)  (( x̃  ỹ)  z̃)
= 0̃,

that is, ( x̃  z̃)  (ỹ  z̃) ( x̃  ỹ)  z̃. Note that

(( x̃  ỹ)  z̃)  (( x̃  z̃)  (ỹ  z̃))


= (( x̃  ỹ)  z̃)  (( x̃  (ỹ  z̃))  z̃)
( x̃  ỹ)  ( x̃  (ỹ  z̃))
(ỹ  z̃)  ỹ = 0̃,

which shows that ( x̃  ỹ)  z̃ ( x̃  z̃)  (ỹ  z̃). Hence, ( x̃  ỹ)  z̃ = ( x̃  z̃)  (ỹ  z̃).
Therefore, NQ( X ) is positive implicative, so {0̃} is a positive implicative ideal of NQ( X ).

4. Conclusions
We have considered a neutrosophic quadruple BCK/BCI-number on a set and established
neutrosophic quadruple BCK/BCI-algebras, which consist of neutrosophic quadruple BCK/BCI-numbers.
We have investigated several properties and considered ideal theory in a neutrosophic quadruple
BCK-algebra and a closed ideal in a neutrosophic quadruple BCI-algebra. Using subsets A and B
of a neutrosophic quadruple BCK/BCI-algebra, we have considered sets NQ( A, B), which consist of
neutrosophic quadruple BCK/BCI-numbers with a condition. We have provided conditions for the
set NQ( A, B) to be a (positive implicative) ideal of a neutrosophic quadruple BCK-algebra, and the set
NQ( A, B) to be a (closed) ideal of a neutrosophic quadruple BCI-algebra. We have provided an example


to show that the set {0̃} is not a positive implicative ideal in a neutrosophic quadruple BCK-algebra,
and we have considered conditions for the set {0̃} to be a positive implicative ideal in a neutrosophic
quadruple BCK-algebra.

Author Contributions: Y.B.J. and S.-Z.S. initiated the main idea of this work and wrote the paper. F.S. and H.B.
provided examples and checked the content. All authors conceived and designed the new definitions and results,
and have read and approved the final manuscript for submission.
Acknowledgments: The authors wish to thank the anonymous reviewers for their valuable suggestions.
The second author, Seok-Zun Song, was supported under the framework of international cooperation program
managed by the National Research Foundation of Korea (2017K2A9A1A01092970).
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Smarandache, F. Neutrosophy, Neutrosophic Probability, Set, and Logic, ProQuest Information & Learning,
Ann Arbor, Michigan, USA, p. 105, 1998. Available online: http://fs.gallup.unm.edu/eBook-neutrosophics6.pdf
(accessed on 1 September 2007).
2. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic
Probability; American Research Press: Rehoboth, NM, USA, 1999.
3. Smarandache, F. Neutrosophic set—A generalization of the intuitionistic fuzzy set. Int. J. Pure Appl. Math.
2005, 24, 287–297.
4. Garg, H. Linguistic single-valued neutrosophic prioritized aggregation operators and their applications to
multiple-attribute group decision-making. J. Ambient Intell. Humaniz. Comput. 2018, in press. [CrossRef]
5. Garg, H. Non-linear programming method for multi-criteria decision making problems under interval
neutrosophic set environment. Appl. Intell. 2017, in press. [CrossRef]
6. Garg, H. Some New Biparametric Distance Measures on Single-Valued Neutrosophic Sets with Applications
to Pattern Recognition and Medical Diagnosis. Information 2017, 8, 162. [CrossRef]
7. Garg, H. Novel single-valued neutrosophic aggregated operators under Frank norm operation and its
application to decision-making process. Int. J. Uncertain. Quantif. 2016, 6, 361–375.
8. Garg, H.; Garg, N. On single-valued neutrosophic entropy of order α. Neutrosophic Sets Syst. 2016, 14, 21–28.
9. Saeid, A.B.; Jun, Y.B. Neutrosophic subalgebras of BCK/BCI-algebras based on neutrosophic points.
Ann. Fuzzy Math. Inform. 2017, 14, 87–97.
10. Jun, Y.B. Neutrosophic subalgebras of several types in BCK/BCI-algebras. Ann. Fuzzy Math. Inform. 2017, 14,
75–86.
11. Jun, Y.B.; Kim, S.J.; Smarandache, F. Interval neutrosophic sets with applications in BCK/BCI-algebra.
Axioms 2018, 7, 23. [CrossRef]
12. Jun, Y.B.; Smarandache, F.; Bordbar, H. Neutrosophic N -structures applied to BCK/BCI-algebras. Information
2017, 8, 128. [CrossRef]
13. Jun, Y.B.; Smarandache, F.; Song, S.Z.; Khan, M. Neutrosophic positive implicative N -ideals in BCK/BCI-algebras.
Axioms 2018, 7, 3. [CrossRef]
14. Khan, M.; Anis, S.; Smarandache, F.; Jun, Y.B. Neutrosophic N -structures and their applications in
semigroups. Ann. Fuzzy Math. Inform. 2017, 14, 583–598.
15. Öztürk, M.A.; Jun, Y.B. Neutrosophic ideals in BCK/BCI-algebras based on neutrosophic points. J. Inter.
Math. Virtual Inst. 2018, 8, 1–17.
16. Song, S.Z.; Smarandache, F.; Jun, Y.B. Neutrosophic commutative N -ideals in BCK-algebras. Information
2017, 8, 130. [CrossRef]
17. Agboola, A.A.A.; Davvaz, B.; Smarandache, F. Neutrosophic quadruple algebraic hyperstructures.
Ann. Fuzzy Math. Inform. 2017, 14, 29–42.
18. Akinleye, S.A.; Smarandache, F.; Agboola, A.A.A. On neutrosophic quadruple algebraic structures.
Neutrosophic Sets Syst. 2016, 12, 122–126.
19. Iséki, K. On BCI-algebras. Math. Semin. Notes 1980, 8, 125–130.


20. Iséki, K.; Tanaka, S. An introduction to the theory of BCK-algebras. Math. Jpn. 1978, 23, 1–26.
21. Huang, Y. BCI-Algebra; Science Press: Beijing, China, 2006.
22. Meng, J.; Jun, Y.B. BCK-Algebras; Kyungmoonsa Co.: Seoul, Korea, 1994.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Decision-Making with Bipolar Neutrosophic TOPSIS
and Bipolar Neutrosophic ELECTRE-I
Muhammad Akram 1, *, Shumaiza 1 and Florentin Smarandache 2
1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected]
2 Mathematics and Science Department, University of New Mexico, Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]

Received: 12 April 2018; Accepted: 11 May 2018; Published: 15 May 2018

Abstract: Technique for the order of preference by similarity to ideal solution (TOPSIS) and elimination
and choice translating reality (ELECTRE) are widely used methods to solve multi-criteria decision
making problems. In this research article, we present bipolar neutrosophic TOPSIS method and bipolar
neutrosophic ELECTRE-I method to solve such problems. We use the revised closeness degree to rank
the alternatives in our bipolar neutrosophic TOPSIS method. We describe bipolar neutrosophic TOPSIS
method and bipolar neutrosophic ELECTRE-I method by flow charts. We solve numerical examples by
proposed methods. We also give a comparison of these methods.

Keywords: neutrosophic sets; bipolar neutrosophic TOPSIS; bipolar neutrosophic ELECTRE-I;


normalized Euclidean distance

1. Introduction
The theory of fuzzy sets was introduced by Zadeh [1]. Fuzzy set theory allows objects to be members of a set with a degree of membership, which can take any value within the closed unit interval [0, 1]. Smarandache [2] originally introduced neutrosophy, a branch of philosophy which examines the origin, nature, and scope of neutralities, as well as their connections with different intellectual spectra. To apply neutrosophic sets to real-life problems more conveniently, Smarandache [2] and Wang et al. [3] defined single-valued neutrosophic sets, whose membership values are taken from subsets of [0, 1]. Thus, a single-valued neutrosophic set is an instance of a neutrosophic set and can be used feasibly to deal with real-world problems, especially in decision support. Deli et al. [4] dealt with bipolar neutrosophic sets, which are an extension of bipolar fuzzy sets [5].
Multi-criteria decision making (MCDM) is a process to make an ideal choice that has the highest
degree of achievement from a set of alternatives that are characterized in terms of multiple conflicting
criteria. Hwang and Yoon [6] developed the TOPSIS method, which is one of the most favorable
and effective MCDM methods to solve MCDM problems. In classical MCDM methods, the attribute
values and weights are determined precisely. To deal with problems consisting of incomplete and
vague information, Chen [7] presented the fuzzy version of the TOPSIS method for the first time in 2000. Chung and Chu [8] presented a fuzzy TOPSIS method under group decisions for the facility location selection problem. Hadi-Vencheh and Mirjaberi [9] proposed the fuzzy inferior ratio method for multiple attribute decision making
problems. Joshi and Kumar [10] discussed the TOPSIS method based on intuitionistic fuzzy entropy
and distance measure for multi criteria decision making. A comparative study of multiple criteria
decision making methods under stochastic inputs is described by Kolios et al. [11]. Akram et al. [12–14]
considered decision support systems based on bipolar fuzzy graphs. Applications of bipolar fuzzy
sets to graphs have been discussed in [15,16]. Faizi et al. [17] presented group decision making for
hesitant fuzzy sets based on characteristic objects method. Recently, Alghamdi et al. [18] have studied

Axioms 2018, 7, 33; doi:10.3390/axioms7020033; www.mdpi.com/journal/axioms



multi-criteria decision-making methods in bipolar fuzzy environment. Dey et al. [19] considered
TOPSIS method for solving the decision making problem under bipolar neutrosophic environment.
On the other hand, the ELECTRE is one of the useful MCDM methods. This outranking
method was proposed by Benayoun et al. [20], which was later referred to as ELECTRE-I method.
Different versions of ELECTRE method have been developed as ELECTRE-I, II, III, IV and TRI.
Hatami-Marbini and Tavana [21] extended the ELECTRE-I method and gave an alternative fuzzy
outranking method to deal with uncertain and linguistic information. Aytac et al. [22] considered
fuzzy ELECTRE-I method for evaluating catering firm alternatives. Wu and Chen [23] proposed
the multi-criteria analysis approach ELECTRE based on intuitionistic fuzzy sets. In this research
article, we present bipolar neutrosophic TOPSIS method and bipolar neutrosophic ELECTRE-I method
to solve MCDM problems. We use the revised closeness degree to rank the alternatives in our
bipolar neutrosophic TOPSIS method. We describe bipolar neutrosophic TOPSIS method and bipolar
neutrosophic ELECTRE-I method by flow charts. We solve numerical examples by proposed methods.
We also give a comparison of these methods. For other notions and applications that are not mentioned
in this paper, the readers are referred to [24–29].

2. Bipolar Neutrosophic TOPSIS Method


Definition 1. Ref. [4] Let C be a nonempty set. A bipolar neutrosophic set (BNS) $\widetilde{B}$ on C is defined as follows:

$$\widetilde{B} = \{\langle c,\, T^{+}_{\widetilde{B}}(c),\, I^{+}_{\widetilde{B}}(c),\, F^{+}_{\widetilde{B}}(c),\, T^{-}_{\widetilde{B}}(c),\, I^{-}_{\widetilde{B}}(c),\, F^{-}_{\widetilde{B}}(c)\rangle \mid c \in C\},$$

where $T^{+}_{\widetilde{B}}(c), I^{+}_{\widetilde{B}}(c), F^{+}_{\widetilde{B}}(c) : C \to [0,1]$ and $T^{-}_{\widetilde{B}}(c), I^{-}_{\widetilde{B}}(c), F^{-}_{\widetilde{B}}(c) : C \to [-1,0]$.

We now describe our proposed bipolar neutrosophic TOPSIS method.


Let $S = \{S_1, S_2, \dots, S_m\}$ be a set of m favorable alternatives and let $T = \{T_1, T_2, \dots, T_n\}$ be a set of n attributes. Let $W = [w_1\ w_2\ \cdots\ w_n]^{T}$ be the weight vector such that $0 \le w_j \le 1$ and $\sum_{j=1}^{n} w_j = 1$. Suppose that the rating value of each alternative $S_i$, $i = 1, 2, \dots, m$, with respect to the attributes $T_j$, $j = 1, 2, \dots, n$, is given by the decision maker in the form of bipolar neutrosophic sets (BNSs). The steps of the bipolar neutrosophic TOPSIS method are described as follows:

(i) Each value of an alternative is estimated with respect to the n criteria. The value of each alternative under each criterion is given in the form of BNSs, and they can be expressed in the decision matrix as

$$K = [k_{ij}]_{m \times n} = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{m1} & k_{m2} & \cdots & k_{mn} \end{bmatrix}.$$

Each entry $k_{ij} = \langle T^{+}_{ij}, I^{+}_{ij}, F^{+}_{ij}, T^{-}_{ij}, I^{-}_{ij}, F^{-}_{ij}\rangle$, where $T^{+}_{ij}$, $I^{+}_{ij}$ and $F^{+}_{ij}$ represent the degrees of positive truth, indeterminacy and falsity membership, respectively, whereas $T^{-}_{ij}$, $I^{-}_{ij}$ and $F^{-}_{ij}$ represent the degrees of negative truth, indeterminacy and falsity membership, respectively, such that $T^{+}_{ij}, I^{+}_{ij}, F^{+}_{ij} \in [0,1]$, $T^{-}_{ij}, I^{-}_{ij}, F^{-}_{ij} \in [-1,0]$ and $0 \le T^{+}_{ij} + I^{+}_{ij} + F^{+}_{ij} - T^{-}_{ij} - I^{-}_{ij} - F^{-}_{ij} \le 6$, $i = 1, 2, \dots, m$; $j = 1, 2, \dots, n$.


(ii) Suppose that the weights of the criteria are not equally assigned and they are totally unknown to the decision maker. We use the maximizing deviation method [30] to determine the unknown weights of the criteria. Therefore, the weight of the attribute $T_j$ is given as

$$w_j = \frac{\sum\limits_{i=1}^{m}\sum\limits_{l=1}^{m} |k_{ij} - k_{lj}|}{\sqrt{\sum\limits_{j=1}^{n}\left(\sum\limits_{i=1}^{m}\sum\limits_{l=1}^{m} |k_{ij} - k_{lj}|\right)^{2}}},$$

and the normalized weight of the attribute $T_j$ is given as

$$w^{*}_{j} = \frac{\sum\limits_{i=1}^{m}\sum\limits_{l=1}^{m} |k_{ij} - k_{lj}|}{\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{m}\sum\limits_{l=1}^{m} |k_{ij} - k_{lj}|}.$$
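As an illustration of step (ii), the following Python sketch computes the normalized maximizing-deviation weights; the helper bns_distance is a hypothetical scalarization of the deviation |k_ij − k_lj| between two bipolar neutrosophic ratings, since the text does not fix that scalar measure at this point.

```python
import numpy as np

def bns_distance(a, b):
    """Hypothetical scalar deviation |k_ij - k_lj| between two BNS ratings,
    each a 6-tuple (T+, I+, F+, T-, I-, F-); a simple normalized absolute
    difference is assumed here."""
    return sum(abs(x - y) for x, y in zip(a, b)) / 6.0

def maximizing_deviation_weights(K):
    """K: m x n list of BNS ratings (6-tuples). Returns the normalized
    weights w*_j of the n criteria by the maximizing deviation method."""
    m, n = len(K), len(K[0])
    # total deviation of criterion j: sum over all pairs (i, l) of |k_ij - k_lj|
    dev = np.array([
        sum(bns_distance(K[i][j], K[l][j]) for i in range(m) for l in range(m))
        for j in range(n)
    ])
    return dev / dev.sum()   # normalized weights w*_j, summing to 1
```

The weights reported in Step 2 of the examples below were obtained with the paper's own deviation measure; the sketch only illustrates the normalization pattern of step (ii).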

(iii) The accumulated weighted bipolar neutrosophic decision matrix is computed by multiplying the weights of the attributes to the aggregated decision matrix as follows:

$$K \otimes W = [k^{w_j}_{ij}]_{m \times n} = \begin{bmatrix} k^{w_1}_{11} & k^{w_2}_{12} & \cdots & k^{w_n}_{1n} \\ k^{w_1}_{21} & k^{w_2}_{22} & \cdots & k^{w_n}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k^{w_1}_{m1} & k^{w_2}_{m2} & \cdots & k^{w_n}_{mn} \end{bmatrix},$$

where

$$k^{w_j}_{ij} = \langle T^{w_j +}_{ij}, I^{w_j +}_{ij}, F^{w_j +}_{ij}, T^{w_j -}_{ij}, I^{w_j -}_{ij}, F^{w_j -}_{ij}\rangle = \langle 1 - (1 - T^{+}_{ij})^{w_j},\ (I^{+}_{ij})^{w_j},\ (F^{+}_{ij})^{w_j},\ -(-T^{-}_{ij})^{w_j},\ -(-I^{-}_{ij})^{w_j},\ -(1 - (1 - (-F^{-}_{ij}))^{w_j})\rangle.$$
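A minimal sketch of the entry-wise weighting rule of step (iii), assuming each rating is stored as a 6-tuple (T+, I+, F+, T−, I−, F−); the function name weight_bns is illustrative.

```python
def weight_bns(entry, w):
    """Apply the weight w to a bipolar neutrosophic rating
    entry = (Tp, Ip, Fp, Tn, In, Fn), following the rule of step (iii)."""
    Tp, Ip, Fp, Tn, In, Fn = entry
    return (1 - (1 - Tp) ** w,          # 1 - (1 - T+)^w
            Ip ** w,                    # (I+)^w
            Fp ** w,                    # (F+)^w
            -((-Tn) ** w),              # -(-T-)^w
            -((-In) ** w),              # -(-I-)^w
            -(1 - (1 - (-Fn)) ** w))    # -(1 - (1 - (-F-))^w)

# For instance, the first entry of Table 1 below with w_1 = 0.2567 gives,
# up to rounding, the first entry of Table 2.
print(weight_bns((0.4, 0.2, 0.5, -0.6, -0.4, -0.4), 0.2567))
```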

(iv) Two types of attributes, benefit-type attributes and cost-type attributes, are mostly applicable in real-life decision making. The bipolar neutrosophic relative positive ideal solution (BNRPIS) and the bipolar neutrosophic relative negative ideal solution (BNRNIS) for both types of attributes are defined as follows:

$$BNRPIS = \big\langle \big({}^{+}T^{w_1 +}_{1}, {}^{+}I^{w_1 +}_{1}, {}^{+}F^{w_1 +}_{1}, {}^{+}T^{w_1 -}_{1}, {}^{+}I^{w_1 -}_{1}, {}^{+}F^{w_1 -}_{1}\big), \big({}^{+}T^{w_2 +}_{2}, {}^{+}I^{w_2 +}_{2}, {}^{+}F^{w_2 +}_{2}, {}^{+}T^{w_2 -}_{2}, {}^{+}I^{w_2 -}_{2}, {}^{+}F^{w_2 -}_{2}\big), \dots, \big({}^{+}T^{w_n +}_{n}, {}^{+}I^{w_n +}_{n}, {}^{+}F^{w_n +}_{n}, {}^{+}T^{w_n -}_{n}, {}^{+}I^{w_n -}_{n}, {}^{+}F^{w_n -}_{n}\big)\big\rangle,$$

$$BNRNIS = \big\langle \big({}^{-}T^{w_1 +}_{1}, {}^{-}I^{w_1 +}_{1}, {}^{-}F^{w_1 +}_{1}, {}^{-}T^{w_1 -}_{1}, {}^{-}I^{w_1 -}_{1}, {}^{-}F^{w_1 -}_{1}\big), \big({}^{-}T^{w_2 +}_{2}, {}^{-}I^{w_2 +}_{2}, {}^{-}F^{w_2 +}_{2}, {}^{-}T^{w_2 -}_{2}, {}^{-}I^{w_2 -}_{2}, {}^{-}F^{w_2 -}_{2}\big), \dots, \big({}^{-}T^{w_n +}_{n}, {}^{-}I^{w_n +}_{n}, {}^{-}F^{w_n +}_{n}, {}^{-}T^{w_n -}_{n}, {}^{-}I^{w_n -}_{n}, {}^{-}F^{w_n -}_{n}\big)\big\rangle,$$


such that, for benefit-type criteria, $j = 1, 2, \dots, n$,

$$\big({}^{+}T^{w_j +}_{j}, {}^{+}I^{w_j +}_{j}, {}^{+}F^{w_j +}_{j}, {}^{+}T^{w_j -}_{j}, {}^{+}I^{w_j -}_{j}, {}^{+}F^{w_j -}_{j}\big) = \big(\max_i(T^{w_j +}_{ij}), \min_i(I^{w_j +}_{ij}), \min_i(F^{w_j +}_{ij}), \min_i(T^{w_j -}_{ij}), \max_i(I^{w_j -}_{ij}), \max_i(F^{w_j -}_{ij})\big),$$

$$\big({}^{-}T^{w_j +}_{j}, {}^{-}I^{w_j +}_{j}, {}^{-}F^{w_j +}_{j}, {}^{-}T^{w_j -}_{j}, {}^{-}I^{w_j -}_{j}, {}^{-}F^{w_j -}_{j}\big) = \big(\min_i(T^{w_j +}_{ij}), \max_i(I^{w_j +}_{ij}), \max_i(F^{w_j +}_{ij}), \max_i(T^{w_j -}_{ij}), \min_i(I^{w_j -}_{ij}), \min_i(F^{w_j -}_{ij})\big).$$

Similarly, for cost-type criteria, $j = 1, 2, \dots, n$,

$$\big({}^{+}T^{w_j +}_{j}, {}^{+}I^{w_j +}_{j}, {}^{+}F^{w_j +}_{j}, {}^{+}T^{w_j -}_{j}, {}^{+}I^{w_j -}_{j}, {}^{+}F^{w_j -}_{j}\big) = \big(\min_i(T^{w_j +}_{ij}), \max_i(I^{w_j +}_{ij}), \max_i(F^{w_j +}_{ij}), \max_i(T^{w_j -}_{ij}), \min_i(I^{w_j -}_{ij}), \min_i(F^{w_j -}_{ij})\big),$$

$$\big({}^{-}T^{w_j +}_{j}, {}^{-}I^{w_j +}_{j}, {}^{-}F^{w_j +}_{j}, {}^{-}T^{w_j -}_{j}, {}^{-}I^{w_j -}_{j}, {}^{-}F^{w_j -}_{j}\big) = \big(\max_i(T^{w_j +}_{ij}), \min_i(I^{w_j +}_{ij}), \min_i(F^{w_j +}_{ij}), \min_i(T^{w_j -}_{ij}), \max_i(I^{w_j -}_{ij}), \max_i(F^{w_j -}_{ij})\big).$$

(v) The normalized Euclidean distance of each alternative $\langle T^{w_j +}_{ij}, I^{w_j +}_{ij}, F^{w_j +}_{ij}, T^{w_j -}_{ij}, I^{w_j -}_{ij}, F^{w_j -}_{ij}\rangle$ from the BNRPIS $\langle {}^{+}T^{w_j +}_{j}, {}^{+}I^{w_j +}_{j}, {}^{+}F^{w_j +}_{j}, {}^{+}T^{w_j -}_{j}, {}^{+}I^{w_j -}_{j}, {}^{+}F^{w_j -}_{j}\rangle$ can be calculated as

$$d_N(S_i, BNRPIS) = \sqrt{\frac{1}{6n}\sum_{j=1}^{n}\Big[(T^{w_j+}_{ij} - {}^{+}T^{w_j+}_{j})^2 + (I^{w_j+}_{ij} - {}^{+}I^{w_j+}_{j})^2 + (F^{w_j+}_{ij} - {}^{+}F^{w_j+}_{j})^2 + (T^{w_j-}_{ij} - {}^{+}T^{w_j-}_{j})^2 + (I^{w_j-}_{ij} - {}^{+}I^{w_j-}_{j})^2 + (F^{w_j-}_{ij} - {}^{+}F^{w_j-}_{j})^2\Big]},$$

and the normalized Euclidean distance of each alternative $\langle T^{w_j +}_{ij}, I^{w_j +}_{ij}, F^{w_j +}_{ij}, T^{w_j -}_{ij}, I^{w_j -}_{ij}, F^{w_j -}_{ij}\rangle$ from the BNRNIS $\langle {}^{-}T^{w_j +}_{j}, {}^{-}I^{w_j +}_{j}, {}^{-}F^{w_j +}_{j}, {}^{-}T^{w_j -}_{j}, {}^{-}I^{w_j -}_{j}, {}^{-}F^{w_j -}_{j}\rangle$ can be calculated as

$$d_N(S_i, BNRNIS) = \sqrt{\frac{1}{6n}\sum_{j=1}^{n}\Big[(T^{w_j+}_{ij} - {}^{-}T^{w_j+}_{j})^2 + (I^{w_j+}_{ij} - {}^{-}I^{w_j+}_{j})^2 + (F^{w_j+}_{ij} - {}^{-}F^{w_j+}_{j})^2 + (T^{w_j-}_{ij} - {}^{-}T^{w_j-}_{j})^2 + (I^{w_j-}_{ij} - {}^{-}I^{w_j-}_{j})^2 + (F^{w_j-}_{ij} - {}^{-}F^{w_j-}_{j})^2\Big]}.$$
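The distances of step (v) reduce to a normalized Euclidean norm over all 6n membership components; a small sketch, assuming the same 6-tuple encoding as above:

```python
import math

def d_N(row, ideal):
    """Normalized Euclidean distance between the weighted ratings of one
    alternative (row: list of n 6-tuples) and an ideal solution
    (ideal: list of n 6-tuples), as in step (v)."""
    n = len(row)
    total = sum((a - b) ** 2
                for entry, ref in zip(row, ideal)
                for a, b in zip(entry, ref))
    return math.sqrt(total / (6 * n))
```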
(vi) The revised closeness degree of each alternative to the BNRPIS is denoted by $\rho_i$, and it is calculated using the formula

$$\rho(S_i) = \frac{d_N(S_i, BNRNIS)}{\max\limits_{i}\{d_N(S_i, BNRNIS)\}} - \frac{d_N(S_i, BNRPIS)}{\min\limits_{i}\{d_N(S_i, BNRPIS)\}}, \quad i = 1, 2, \dots, m.$$

(vii) By using the revised closeness degrees, the inferior ratio of each alternative is determined as follows:

$$IR(i) = \frac{\rho(S_i)}{\min\limits_{1 \le i \le m}(\rho(S_i))}.$$

It is clear that each value of IR(i ) lies in the closed unit interval [0,1].
(viii) The alternatives are ranked according to the ascending order of their inferior ratio values, and the alternative with the minimum inferior ratio value is chosen as the best one.
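Steps (vi)–(viii) are simple arithmetic once the two distances are known; a sketch (function name illustrative):

```python
def rank_by_inferior_ratio(d_pos, d_neg):
    """d_pos[i] = d_N(S_i, BNRPIS), d_neg[i] = d_N(S_i, BNRNIS).
    Returns (rho, IR, ranking), with the ranking in ascending order of
    inferior ratio, so the first listed alternative is the best one."""
    rho = [dn / max(d_neg) - dp / min(d_pos) for dp, dn in zip(d_pos, d_neg)]
    worst = min(rho)                      # most negative revised closeness
    IR = [r / worst for r in rho]         # inferior ratio, lying in [0, 1]
    ranking = sorted(range(len(IR)), key=lambda i: IR[i])
    return rho, IR, ranking
```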

Geometric representation of the procedure of our proposed bipolar neutrosophic TOPSIS method
is shown in Figure 1.


Figure 1. Flow chart of bipolar neutrosophic TOPSIS (technique for the order of preference by similarity to ideal solution): identification of alternatives and criteria → construct the bipolar neutrosophic decision matrix → calculate the weights of the criteria by the maximizing deviation method → construct the weighted bipolar neutrosophic decision matrix → compute the BNRPIS and BNRNIS → calculate the distance of each alternative from the BNRPIS and BNRNIS → calculate the revised closeness degree of each alternative to the BNRPIS → calculate the inferior ratio of each alternative → rank the alternatives according to the ascending order of inferior ratio values.

3. Applications
In this section, we apply the bipolar neutrosophic TOPSIS method to solve real-life problems: choosing the best electronic commerce web site, the best heart surgeon and the best employee.


3.1. Electronic Commerce Web Site


Electronic commerce (e-commerce, for short) is the process of trading services and goods through electronic networks such as computer systems and the internet. In recent times, e-commerce has become a very attractive and convenient choice for both businesses and customers. Many companies are interested in developing their online stores rather than brick-and-mortar buildings, because of the growing demand of customers for online purchasing. Suppose that
a person wants to launch his own online store for selling his products. He will choose the e-commerce
web site that has comparatively better ratings and that is most popular among internet users. After
initial screening four web sites, S1 = Shopify, S2 = 3d Cart, S3 = BigCommerce and S4 = Shopsite,
are considered. Four attributes, T1 = Customer satisfaction, T2 = Comparative prices, T3 = On-time
delivery and T4 = Digital marketing, are designed to choose the best alternative.
Step 1. The decision matrix in the form of bipolar neutrosophic information is given as in Table 1:

Table 1. Bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.4, 0.2, 0.5, (0.5, 0.3, 0.3, (0.2, 0.7, 0.5, (0.4, 0.6, 0.5,
−0.6, −0.4, −0.4) −0.7, −0.2, −0.4) −0.4, −0.4, −0.3) −0.3, −0.7, −0.4)
S2 (0.3, 0.6, 0.1, (0.2, 0.6, 0.1, (0.4, 0.2, 0.5, (0.2, 0.7, 0.5,
−0.5, −0.7, −0.5) −0.5, −0.3, −0.7) −0.6, −0.3, −0.1) −0.5, −0.3, −0.2)
S3 (0.3, 0.5, 0.2, (0.4, 0.5, 0.2, (0.9, 0.5, 0.7, (0.3, 0.7, 0.6,
−0.4, −0.3, −0.7) −0.3, −0.8, −0.5) −0.3, −0.4, −0.3) −0.5, −0.5, −0.4)
S4 (0.6, 0.7, 0.5, (0.8, 0.4, 0.6, (0.6, 0.3, 0.6, (0.8, 0.3, 0.2,
−0.2, −0.1, −0.3) −0.1, −0.3, −0.4) −0.1, −0.4, −0.2) −0.1, −0.3, −0.1)

Step 2. The normalized weights of the criteria are calculated by using maximizing deviation method
as given below:

$w_1 = 0.2567$, $w_2 = 0.2776$, $w_3 = 0.2179$, $w_4 = 0.2478$, where $\sum_{j=1}^{4} w_j = 1$.

Step 3. The weighted bipolar neutrosophic decision matrix is constructed by multiplying the weights
to decision matrix as given in Table 2:

Table 2. Weighted bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.123, 0.662, 0.837, (0.175, 0.716, 0.716, (0.047, 0.925, 0.86, (0.119, 0.881, 0.842,
−0.877, −0.79, −0.123) −0.906, −0.64, −0.132) −0.819, −0.819, −0.075) −0.742, −0.915, −0.119)
S2 (0.087, 0.877, 0.554, (0.06, 0.868, 0.528, (0.105, 0.704, 0.86, (0.054, 0.915, 0.842,
−0.837, −0.913, −0.163) −0.825, −0.716, −0.284) −0.895, −0.769, −0.023) −0.842, −0.742, −0.054)
S3 (0.087, 0.837, 0.662, (0.132, 0.825, 0.64, (0.395, 0.86, 0.925, (0.085, 0.915, 0.881,
−0.79, −0.734, −0.266) −0.716, −0.94, −0.175) −0.769, −0.819, −0.075) −0.842, −0.842, −0.119)
S4 (0.21, 0.913, 0.837, (0.36, 0.775, 0.868, (0.181, 0.769, 0.895, (0.329, 0.742, 0.671,
−0.662, −0.554, −0.087) −0.528, −0.716, −0.132) −0.605, −0.819, −0.047) −0.565, −0.742, −0.026)

Step 4. The BNRPIS and BNRNIS are given by

BNRPIS =< (0.21, 0.662, 0.554, −0.877, −0.554, −0.087),


(0.06, 0.868, 0.868, −0.528, −0.94, −0.284),
(0.395, 0.704, 0.86, −0.895, −0.769, −0.023),
(0.329, 0.742, 0.671, −0.842, −0.742, −0.062) >;


BNRN IS =< (0.087, 0.913, 0.837, −0.662, −0.913, −0.266),


(0.36, 0.716, 0.528, −0.906, −0.64, −0.132),
(0.047, 0.925, 0.925, −0.605, −0.819, −0.075),
(0.054, 0.915, 0.881, −0.565, −0.915, −0.119) > .

Step 5. The normalized Euclidean distances of each alternative from the BNRPISs and the BNRNISs
are given as follows:

d N (S1 , BNRPIS) = 0.1805, d N (S1 , BNRN IS) = 0.1125,


d N (S2 , BNRPIS) = 0.1672, d N (S2 , BNRN IS) = 0.1485,
d N (S3 , BNRPIS) = 0.135, d N (S3 , BNRN IS) = 0.1478,
d N (S4 , BNRPIS) = 0.155, d N (S4 , BNRN IS) = 0.1678.

Step 6. The revised closeness degree of each alternative is given as

ρ(S1 ) = −0.667, ρ(S2 ) = −0.354, ρ(S3 ) = −0.119, ρ(S4 ) = −0.148.

Step 7. The inferior ratio to each alternative is given as

IR(1) = 1, IR(2) = 0.52, IR(3) = 0.18, IR(4) = 0.22.

Step 8. Ordering the web stores according to the ascending order of inferior ratio values, we obtain: S3 < S4 < S2 < S1. Therefore, the person will choose BigCommerce for opening a web store.
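As a quick numerical check, feeding the Step 5 distances into the formulas of steps (vi) and (vii) reproduces the reported revised closeness degrees and inferior ratios up to small rounding differences (the paper reports 0.52 for IR(2)):

```python
d_pos = [0.1805, 0.1672, 0.1350, 0.1550]   # d_N(S_i, BNRPIS) from Step 5
d_neg = [0.1125, 0.1485, 0.1478, 0.1678]   # d_N(S_i, BNRNIS) from Step 5

rho = [dn / max(d_neg) - dp / min(d_pos) for dp, dn in zip(d_pos, d_neg)]
IR = [r / min(rho) for r in rho]

print([round(r, 3) for r in rho])   # about [-0.667, -0.354, -0.119, -0.148]
print([round(x, 2) for x in IR])    # about [1.0, 0.53, 0.18, 0.22]
```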

3.2. Heart Surgeon


Suppose that a heart patient wants to select a best cardiac surgeon for heart surgery. After initial
screening, five surgeons are considered for further evaluation. These surgeons represent the
alternatives and are denoted by S1 , S2 , S3 , S4 , and S5 in our MCDM problem. Suppose that he
concentrates on four characteristics, T1 = Availability of medical equipment, T2 = Surgeon reputation,
T3 = Expenditure and T4 = Suitability of time, in order to select the best surgeon. These characteristics
represent the criteria for this MCDM problem.

Step 1. The decision matrix in the form of bipolar neutrosophic information is given as in Table 3:

Table 3. Bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.6, 0.5, 0.3, (0.5, 0.7, 0.4, (0.3, 0.5, 0.5, (0.5, 0.3, 0.6,
−0.5, −0.7, −0.4) −0.6, −0.4, −0.5) −0.7, −0.3, −0.4) −0.4, −0.7, −0.5)
S2 (0.9, 0.3, 0.2, (0.7, 0.4, 0.2, (0.4, 0.7, 0.6, (0.8, 0.3, 0.2,
−0.3, −0.6, −0.5) −0.4, −0.5, −0.7) −0.6, −0.3, −0.3) −0.2, −0.5, −0.7)
S3 (0.4, 0.6, 0.6, (0.5, 0.3, 0.6, (0.7, 0.5, 0.3, (0.4, 0.6, 0.7,
−0.7, −0.4, −0.3) −0.6, −0.4, −0.4) −0.4, −0.4, −0.6) −0.5, −0.4, −0.4)
S4 (0.8, 0.5, 0.3, (0.6, 0.4, 0.3, (0.4, 0.5, 0.7, (0.5, 0.4, 0.6,
−0.3, −0.4, −0.5) −0.5, −0.7, −0.8) −0.5, −0.4, −0.2) −0.6, −0.7, −0.3)
S5 (0.6, 0.4, 0.6, (0.4, 0.7, 0.6, (0.6, 0.3, 0.5, (0.5, 0.7, 0.4,
−0.4, −0.7, −0.3) −0.7, −0.5, −0.6) −0.3, −0.7, −0.4) −0.3, −0.6, −0.5)


Step 2. The normalized weights of the criteria are calculated by using maximizing deviation method
as given below:

$w_1 = 0.2480$, $w_2 = 0.2424$, $w_3 = 0.2480$, $w_4 = 0.2616$, where $\sum_{j=1}^{4} w_j = 1$.

Step 3. The weighted bipolar neutrosophic decision matrix is constructed by multiplying the weights
to decision matrix as given in Table 4:

Table 4. Weighted bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.203, 0.842, 0.742, (0.155, 0.917, 0.801, (0.085, 0.842, 0.842, (0.166, 0.730, 0.875,
−0.842, −0.915, −0.119) −0.884, −0.801, −0.155) −0.915, −0.742, −0.119) −0.787, −0.911, −0.166)
S2 (0.435, 0.742, 0.671, (0.253, 0.801, 0.677, (0.119, 0.915, 0.881, (0.344, 0.730, 0.656,
−0.742, −0.881, −0.158) −0.801, −0.845, −0.253) −0.881, −0.742, −0.085) −0.656, −0.834, −0.270)
S3 (0.119, 0.881, 0.881, (0.155, 0.747, 0.884, (0.258, 0.842, 0.742, (0.125, 0.875, 0.911,
−0.915, −0.797, −0.085) −0.884, −0.801, −0.116) −0.797, −0.797, −0.203) −0.834, −0.787, −0.125)
S4 (0.329, 0.842, 0.742, (0.199, 0.801, 0.747, (0.119, 0.842, 0.915, (0.166, 0.787, 0.875,
−0.742, −0.797, −0.158) −0.845, −0.917, −0.323) −0.842, −0.797, −0.054) −0.875, −0.911, −0.089)
S5 (0.203, 0.797, 0.881, (0.116, 0.917, 0.884, (0.203, 0.742, 0.842, (0.166, 0.911, 0.787,
−0.797, −0.915, −0.085) −0.917, −0.845, −0.199) −0.742, −0.915, −0.119) −0.730, −0.875, −0.166)

Step 4. The BNRPIS and BNRNIS are given by

BNRPIS =< (0.435, 0.742, 0.671, −0.915, −0.797, −0.085),


(0.253, 0.747, 0.677, −0.917, −0.801, −0.116),
(0.085, 0.915, 0.915, −0.742, −0.915, −0.203),
(0.344, 0.730, 0.656, −0.875, −0.787, −0.089) >;

BNRN IS =< (0.119, 0.881, 0.881, −0.742, −0.915, −0.158),


(0.116, 0.917, 0.884, −0.801, −0.917, −0.323),
(0.258, 0.742, 0.742, −0.915, −0.742, −0.054),
(0.125, 0.911, 0.911, −0.656, −0.911, −0.270) > .

Step 5. The normalized Euclidean distances of each alternative from the BNRPISs and the BNRNISs
are given as follows:

d N (S1 , BNRPIS) = 0.1176, d N (S1 , BNRN IS) = 0.0945,


d N (S2 , BNRPIS) = 0.0974, d N (S2 , BNRN IS) = 0.1402,
d N (S3 , BNRPIS) = 0.1348, d N (S3 , BNRN IS) = 0.1043,
d N (S4 , BNRPIS) = 0.1089, d N (S4 , BNRN IS) = 0.1093,
d N (S5 , BNRPIS) = 0.1292, d N (S5 , BNRN IS) = 0.0837.

Step 6. The revised closeness degree of each alternative is given as

ρ(S1 ) = −0.553, ρ(S2 ) = 0, ρ(S3 ) = −0.64, ρ(S4 ) = −0.338, ρ(S5 ) = −0.729

Step 7. The inferior ratio to each alternative is given as

IR(1) = 0.73, IR(2) = 0, IR(3) = 0.88, IR(4) = 0.46, IR(5) = 1.


Step 8. Ordering the alternatives in ascending order, we obtain: S2 < S4 < S1 < S3 < S5 . Therefore,
S2 is best among all other alternatives.

3.3. Employee (Marketing Manager)


Process of employee selection has an analytical importance for any kind of business. According to
firm hiring requirements and the job position, this process may vary from a very simple process
to a complicated procedure. Suppose that a company wants to hire an employee for the post of
marketing manager. After initial screening, four candidates are considered as alternatives and
denoted by S1 , S2 , S3 and S4 in our MCDM problem. The requirements for this post, T1 = Confidence,
T2 = Qualification, T3 = Leading skills and T4 = Communication skills, are considered as criteria in
order to select the most relevant candidate.

Step 1. The decision matrix in the form of bipolar neutrosophic information is given as in Table 5:

Table 5. Bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.8, 0.5, 0.3, (0.7, 0.3, 0.2, (0.5, 0.4, 0.6, (0.9, 0.3, 0.2,
−0.3, −0.6, −0.5) −0.3, −0.5, −0.4) −0.5, −0.3, −0.4) −0.3, −0.4, −0.2)
S2 (0.5, 0.7, 0.6 (0.4, 0.7, 0.5, (0.6, 0.8, 0.5, (0.5, 0.3, 0.6,
−0.4, −0.2, −0.4) −0.6, −0.2, −0.3) −0.3, −0.5, −0.7) −0.6, −0.4, −0.3)
S3 (0.4, 0.6, 0.8, (0.6, 0.3, 0.5, (0.3, 0.5, 0.7, (0.5, 0.7, 0.4,
−0.7, −0.3, −0.4) −0.2, −0.4, −0.6) −0.8, −0.4, −0.2) −0.6, −0.3, −0.5)
S4 (0.7, 0.3, 0.5, (0.5, 0.4, 0.6, (0.6, 0.4, 0.3, (0.4, 0.5, 0.7,
−0.4, −0.2, −0.5) −0.4, −0.5, −0.3) −0.3, −0.5, −0.7) −0.6, −0.5, −0.3)

Step 2. The normalized weights of the criteria are calculated by using maximizing deviation method
as given below:

$w_1 = 0.25$, $w_2 = 0.2361$, $w_3 = 0.2708$, $w_4 = 0.2431$, where $\sum_{j=1}^{4} w_j = 1$.

Step 3. The weighted bipolar neutrosophic decision matrix is constructed by multiplying the weights
to decision matrix as given in Table 6:

Table 6. Weighted bipolar neutrosophic decision matrix.

S T T1 T2 T3 T4
S1 (0.3313, 0.8409, 0.7401, (0.2474, 0.7526, 0.6839, (0.1711, 0.7803, 0.8708, (0.4287, 0.7463, 0.6762,
−0.7401, −0.8801, −0.1591) −0.7526, −0.8490, −0.1136) −0.8289, −0.7218, −0.1292) −0.7463, −0.8003, −0.0528)
S2 (0.1591, 0.9147, 0.8801, (0.1136, 0.9192, 0.8490, (0.2197, 0.9414, 0.8289, (0.1551, 0.7463, 0.8832,
−0.7953, −0.6687, −0.1199) −0.8864, −0.6839, −0.0808) −0.7218, −0.8289, −0.2782) −0.8832, −0.8003, −0.0831)
S3 (0.1199, 0.8801, 0.9457, (0.1945, 0.7526, 0.8490, (0.0921, 0.8289, 0.9079, (0.1551, 0.9169, 0.8003,
−0.9147, −0.7401, −0.1199) −0.6839, −0.8055, −0.1945) −0.9414, −0.7803, −0.0586) −0.8832, −0.7463, −0.1551)
S4 (0.2599, 0.7401, 0.8409, (0.1510, 0.8055, 0.8864, (0.2197, 0.7803, 0.7218, (0.1168, 0.8449, 0.9169,
−0.7953, −0.6687, −0.1591) −0.8055, −0.8490, −0.0808) −0.7218, −0.8289, −0.2782) −0.8832, −0.8449, −0.0831)

Step 4. The BNRPIS and BNRNIS are given by

BNRPIS =< (0.3313, 0.7401, 0.7401, −0.9147, −0.6687, −0.1199),


(0.2474, 0.7526, 0.6839, −0.8864, −0.6839, −0.0808),
(0.2197, 0.7803, 0.7218, −0.9414, −0.7218, −0.0586),
(0.4287, 0.7463, 0.6762, −0.8832, −0.7463, −0.0528) >;


BNRN IS =< (0.1199, 0.9147, 0.9457, −0.7401, −0.8801, −0.1591),


(0.1136, 0.9192, 0.8864, −0.6839, −0.8490, −0.1945),
(0.0921, 0.9414, 0.9079, −0.7218, −0.8289, −0.2782),
(0.1168, 0.9169, 0.9169, −0.7463, −0.8449, −0.1551) > .

Step 5. The normalized Euclidean distances of each alternative from the BNRPISs and the BNRNISs
are given as follows:

d N (S1 , BNRPIS) = 0.0906, d N (S1 , BNRN IS) = 0.1393,


d N (S2 , BNRPIS) = 0.1344, d N (S2 , BNRN IS) = 0.0953,
d N (S3 , BNRPIS) = 0.1286, d N (S3 , BNRN IS) = 0.1011,
d N (S4 , BNRPIS) = 0.1293, d N (S4 , BNRN IS) = 0.0999.

Step 6. The revised closeness degree of each alternative is given as

ρ(S1 ) = 0, ρ(S2 ) = −0.799, ρ(S3 ) = −0.694, ρ(S4 ) = −0.780.

Step 7. The inferior ratio to each alternative is given as

IR(1) = 0, IR(2) = 1, IR(3) = 0.87, IR(4) = 0.98.

Step 8. Ordering the alternatives in ascending order, we obtain: S1 < S3 < S4 < S2 . Therefore, the
company will select the candidate S1 for this post.

4. Bipolar Neutrosophic ELECTRE-I Method


In this section, we propose bipolar neutrosophic ELECTRE-I method to solve MCDM problems.
Consider a set of alternatives, denoted by S = {S1 , S2 , S3 , · · · , Sm } and the set of criteria, denoted by
T = { T1 , T2 , T3 , · · · , Tn } which are used to evaluate the alternatives.

(i–iii) As in the bipolar neutrosophic TOPSIS section, the rating values of the alternatives with respect to the criteria are expressed in the form of the matrix $[k_{ij}]_{m \times n}$. The weights $w_j$ of the criteria $T_j$ are evaluated by the maximizing deviation method, and the weighted bipolar neutrosophic decision matrix $[k^{w_j}_{ij}]_{m \times n}$ is constructed.
(iv) The bipolar neutrosophic concordance sets Exy and bipolar neutrosophic discordance sets Fxy
are defined as follows:

$$E_{xy} = \{1 \le j \le n \mid \rho_{xj} \ge \rho_{yj}\}, \quad x \ne y,\ x, y = 1, 2, \dots, m,$$
$$F_{xy} = \{1 \le j \le n \mid \rho_{xj} \le \rho_{yj}\}, \quad x \ne y,\ x, y = 1, 2, \dots, m,$$

where $\rho_{ij} = T^{+}_{ij} + I^{+}_{ij} + F^{+}_{ij} + T^{-}_{ij} + I^{-}_{ij} + F^{-}_{ij}$, $i = 1, 2, \dots, m$, $j = 1, 2, \dots, n$.


(v) The bipolar neutrosophic concordance matrix E is constructed as follows:

$$E = \begin{bmatrix} - & e_{12} & \cdots & e_{1m} \\ e_{21} & - & \cdots & e_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ e_{m1} & e_{m2} & \cdots & - \end{bmatrix},$$

where the bipolar neutrosophic concordance indices $e_{xy}$ are determined as

$$e_{xy} = \sum_{j \in E_{xy}} w_j.$$

(vi) The bipolar neutrosophic discordance matrix F is constructed as follows:

$$F = \begin{bmatrix} - & f_{12} & \cdots & f_{1m} \\ f_{21} & - & \cdots & f_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ f_{m1} & f_{m2} & \cdots & - \end{bmatrix},$$

where the bipolar neutrosophic discordance indices $f_{xy}$ are determined as

$$f_{xy} = \frac{\max\limits_{j \in F_{xy}} \sqrt{\frac{1}{6n}\Big[(T^{w_j+}_{xj} - T^{w_j+}_{yj})^2 + (I^{w_j+}_{xj} - I^{w_j+}_{yj})^2 + (F^{w_j+}_{xj} - F^{w_j+}_{yj})^2 + (T^{w_j-}_{xj} - T^{w_j-}_{yj})^2 + (I^{w_j-}_{xj} - I^{w_j-}_{yj})^2 + (F^{w_j-}_{xj} - F^{w_j-}_{yj})^2\Big]}}{\max\limits_{j} \sqrt{\frac{1}{6n}\Big[(T^{w_j+}_{xj} - T^{w_j+}_{yj})^2 + (I^{w_j+}_{xj} - I^{w_j+}_{yj})^2 + (F^{w_j+}_{xj} - F^{w_j+}_{yj})^2 + (T^{w_j-}_{xj} - T^{w_j-}_{yj})^2 + (I^{w_j-}_{xj} - I^{w_j-}_{yj})^2 + (F^{w_j-}_{xj} - F^{w_j-}_{yj})^2\Big]}}.$$

(vii) Concordance and discordance levels are computed to rank the alternatives. The bipolar neutrosophic concordance level $\hat{e}$ is defined as the average value of the bipolar neutrosophic concordance indices:

$$\hat{e} = \frac{1}{m(m-1)} \sum_{\substack{x=1 \\ x \ne y}}^{m} \sum_{\substack{y=1 \\ y \ne x}}^{m} e_{xy};$$

similarly, the bipolar neutrosophic discordance level $\hat{f}$ is defined as the average value of the bipolar neutrosophic discordance indices:

$$\hat{f} = \frac{1}{m(m-1)} \sum_{\substack{x=1 \\ x \ne y}}^{m} \sum_{\substack{y=1 \\ y \ne x}}^{m} f_{xy}.$$

(viii) The bipolar neutrosophic concordance dominance matrix $\phi$, determined on the basis of $\hat{e}$, is given as follows:

$$\phi = \begin{bmatrix} - & \phi_{12} & \cdots & \phi_{1m} \\ \phi_{21} & - & \cdots & \phi_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{m1} & \phi_{m2} & \cdots & - \end{bmatrix},$$

where $\phi_{xy}$ is defined as

$$\phi_{xy} = \begin{cases} 1, & \text{if } e_{xy} \ge \hat{e}, \\ 0, & \text{if } e_{xy} < \hat{e}. \end{cases}$$


(ix) The bipolar neutrosophic discordance dominance matrix $\psi$, determined on the basis of $\hat{f}$, is given as follows:

$$\psi = \begin{bmatrix} - & \psi_{12} & \cdots & \psi_{1m} \\ \psi_{21} & - & \cdots & \psi_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \psi_{m1} & \psi_{m2} & \cdots & - \end{bmatrix},$$

where $\psi_{xy}$ is defined as

$$\psi_{xy} = \begin{cases} 1, & \text{if } f_{xy} \le \hat{f}, \\ 0, & \text{if } f_{xy} > \hat{f}. \end{cases}$$

(x) Consequently, the bipolar neutrosophic aggregated dominance matrix $\pi$ is evaluated by multiplying the corresponding entries of $\phi$ and $\psi$, that is,

$$\pi = \begin{bmatrix} - & \pi_{12} & \cdots & \pi_{1m} \\ \pi_{21} & - & \cdots & \pi_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \pi_{m1} & \pi_{m2} & \cdots & - \end{bmatrix},$$

where $\pi_{xy}$ is defined as

$$\pi_{xy} = \phi_{xy}\,\psi_{xy}.$$

(xi) Finally, the alternatives are ranked according to the outranking values $\pi_{xy}$. That is, for each pair of alternatives $S_x$ and $S_y$, an arrow from $S_x$ to $S_y$ exists if and only if $\pi_{xy} = 1$. As a result, we have three possible cases:

(a) There exists a unique arrow from $S_x$ to $S_y$.
(b) There exist two possible arrows between $S_x$ and $S_y$.
(c) There is no arrow between $S_x$ and $S_y$.

In case (a), we decide that $S_x$ is preferred to $S_y$. In case (b), $S_x$ and $S_y$ are indifferent, whereas in case (c), $S_x$ and $S_y$ are incomparable.
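A compact sketch (not from the paper) of steps (iv)–(x) in Python, assuming the weighted ratings WK form an m × n list of 6-tuples and w is the list of criteria weights; the function names electre_outranking and dist are illustrative, and the per-criterion distance inside the discordance index is the same squared-difference term used in the TOPSIS distances.

```python
import math
import numpy as np

def electre_outranking(WK, w):
    """WK: m x n list of weighted 6-tuples (T+, I+, F+, T-, I-, F-); w: weights.
    Returns the aggregated dominance matrix pi of steps (iv)-(x)."""
    m, n = len(WK), len(WK[0])
    rho = [[sum(WK[i][j]) for j in range(n)] for i in range(m)]   # rho_ij of step (iv)

    def dist(a, b):   # per-criterion normalized Euclidean term
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) / (6 * n))

    e = np.zeros((m, m))
    f = np.zeros((m, m))
    for x in range(m):
        for y in range(m):
            if x == y:
                continue
            E_xy = [j for j in range(n) if rho[x][j] >= rho[y][j]]   # concordance set
            F_xy = [j for j in range(n) if rho[x][j] <= rho[y][j]]   # discordance set
            e[x, y] = sum(w[j] for j in E_xy)
            d_all = [dist(WK[x][j], WK[y][j]) for j in range(n)]
            d_dis = [d_all[j] for j in F_xy]
            f[x, y] = max(d_dis) / max(d_all) if d_dis and max(d_all) > 0 else 0.0

    e_hat = e.sum() / (m * (m - 1))          # concordance level
    f_hat = f.sum() / (m * (m - 1))          # discordance level
    phi = (e >= e_hat).astype(int)           # concordance dominance
    psi = (f <= f_hat).astype(int)           # discordance dominance
    np.fill_diagonal(phi, 0)
    np.fill_diagonal(psi, 0)
    return phi * psi                         # pi_xy = phi_xy * psi_xy
```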

Geometric representation of the proposed bipolar neutrosophic ELECTRE-I method is shown in Figure 2.


Figure 2. Flow chart of bipolar neutrosophic ELECTRE-I (elimination and choice translating reality I): identification of alternatives and criteria → construct the bipolar neutrosophic decision matrix → calculate the weights of the criteria by the maximizing deviation method → construct the weighted bipolar neutrosophic decision matrix → construct the bipolar neutrosophic concordance sets → construct the bipolar neutrosophic discordance sets → compute the bipolar neutrosophic concordance dominance matrix → compute the bipolar neutrosophic discordance dominance matrix → compute the bipolar neutrosophic aggregated dominance matrix → sketch the decision graph.


Numerical Example
In Section 3, MCDM problems are presented using the bipolar neutrosophic TOPSIS method.
In this section, we apply our proposed bipolar neutrosophic ELECTRE-I method to select the “electronic
commerce web site” to compare these two MCDM methods. Steps (1–3) have already been done in
Section 3.1. So we move on to Step 4.

Step 4. The bipolar neutrosophic concordance sets Exy , s are given as in Table 7:

Table 7. Bipolar neutrosophic concordance sets.

E xy y 1 2 3 4
E1y - {1, 2, 3} {1, 2} {}
E2y {4} - {4} {}
E3y {3, 4} {1, 2, 3} - {3}
E4y {1, 2, 3, 4} {1, 2, 3, 4} {1, 2, 4} -

Step 5. The bipolar neutrosophic discordance sets Fxy , s are given as in Table 8.

Table 8. Bipolar neutrosophic discordance sets.

Fxy y 1 2 3 4
F1y - {4} {3, 4} {1, 2, 3, 4}
F2y {1, 2, 3} - {1, 2, 3} {1, 2, 3, 4}
F3y {1, 2} {4} - {1, 2, 4}
F4y {} {} {3} -

Step 6. The bipolar neutrosophic concordance matrix E is computed as follows:

$$E = \begin{bmatrix} - & 0.7522 & 0.5343 & 0 \\ 0.2478 & - & 0.2478 & 0 \\ 0.4657 & 0.7522 & - & 0.2179 \\ 1 & 1 & 0.7821 & - \end{bmatrix}.$$
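As a check, each entry of E is just the sum of the Step 2 weights over the corresponding concordance set of Table 7; for example:

```python
w = [0.2567, 0.2776, 0.2179, 0.2478]          # weights from Step 2 of Section 3.1
E_12 = {1, 2, 3}                               # concordance set E_12 from Table 7
print(round(sum(w[j - 1] for j in E_12), 4))   # 0.7522, the (1, 2) entry of E
```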

Step 7. The bipolar neutrosophic discordance matrix F is computed as follows:

$$F = \begin{bmatrix} - & 0.5826 & 0.9464 & 1 \\ 1 & - & 1 & 1 \\ 1 & 0.3534 & - & 1 \\ 0 & 0 & 0.6009 & - \end{bmatrix}.$$

Step 8. The bipolar neutrosophic concordance level is $\hat{e} = 0.5003$ and the bipolar neutrosophic discordance level is $\hat{f} = 0.7069$. The bipolar neutrosophic concordance dominance matrix $\phi$ and the bipolar neutrosophic discordance dominance matrix $\psi$ are as follows:

$$\phi = \begin{bmatrix} - & 1 & 1 & 0 \\ 0 & - & 0 & 0 \\ 0 & 1 & - & 0 \\ 1 & 1 & 1 & - \end{bmatrix}, \qquad \psi = \begin{bmatrix} - & 1 & 0 & 0 \\ 0 & - & 0 & 0 \\ 0 & 1 & - & 0 \\ 0 & 0 & 0 & - \end{bmatrix}.$$

Step 9. The bipolar neutrosophic aggregated dominance matrix π is computed as


$$\pi = \begin{bmatrix} - & 1 & 0 & 0 \\ 0 & - & 0 & 0 \\ 0 & 1 & - & 0 \\ 0 & 0 & 0 & - \end{bmatrix}.$$

According to nonzero values of π xy , we get the alternatives in the following sequence:

S1 → S2 ← S3

Therefore, the most favorable alternatives are S3 and S1 .

5. Comparison of Bipolar Neutrosophic TOPSIS and Bipolar Neutrosophic ELECTRE-I


TOPSIS and ELECTRE-I are the most commonly used MCDM methods to solve decision making
problems, in which the best possible alternative is selected among others. The main idea of the
TOPSIS method is that the chosen alternative has the shortest distance from positive ideal solution and
the greatest distance from negative ideal solution, whereas the ELECTRE-I method is based on the
binary comparison of alternatives. The proposed MCDM methods TOPSIS and ELECTRE-I are based
on bipolar neutrosophic information. In the bipolar neutrosophic TOPSIS method, the normalized
Euclidean distance is used to compute the revised closeness coefficient of alternatives to BNRPIS and
BNRNIS. Alternatives are ranked in increasing order on the basis of inferior ratio values. Bipolar
neutrosophic TOPSIS is an effective method because it has a simple process and is able to deal with
any number of alternatives and criteria. Throughout history, one drawback of the TOPSIS method is
that more rank reversals are created by increasing the number of alternatives. The proposed bipolar
neutrosophic ELECTRE-I is an outranking relation theory that compares all pairs of alternatives and
figures out which alternatives are preferred to the others by systematically comparing them for each
criterion. The connection between different alternatives shows the bipolar neutrosophic concordance
and bipolar neutrosophic discordance behavior of alternatives. The bipolar neutrosophic TOPSIS
method gives only one possible alternative but the bipolar neutrosophic ELECTRE-I method sometimes
provides a set of alternatives as a final selection to consider the MCDM problem. Despite all of the
above comparisons, it is difficult to determine which method is most convenient, because both methods
have their own importance and can be used according to the choice of the decision maker.

6. Conclusions
A single-valued neutrosophic set, as an instance of a neutrosophic set, provides an additional possibility to represent the imprecise, uncertain, inconsistent and incomplete information which exists in real situations. Single-valued neutrosophic models are more flexible and practical than fuzzy and intuitionistic fuzzy models. We have presented the procedure, technique and implications of the TOPSIS and ELECTRE-I methods under a bipolar neutrosophic environment. The rating values of
alternatives with respect to attributes are expressed in the form of BNSs. The unknown weights of
the attributes are calculated by maximizing the deviation method to construct the weighted decision
matrix. The normalized Euclidean distance is used to calculate the distance of each alternative from
BNRPIS and BNRNIS. Revised closeness degrees are computed and then the inferior ratio method
is used to rank the alternatives in bipolar neutrosophic TOPSIS. The concordance and discordance
matrices are evaluated to rank the alternatives in bipolar neutrosophic ELECTRE-I. We have also
presented some examples to explain these methods.

Author Contributions: M.A. and S. conceived and designed the experiments; F.S. analyzed the data; S. wrote
the paper.
Acknowledgments: The authors are very thankful to the editor and referees for their valuable comments and
suggestions for improving the paper.
Conflicts of Interest: The authors declare no conflicts of interest.


References
1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
2. Smarandache, F. A Unifying Field of Logics, Neutrosophy: Neutrosophic Probability, Set and Logic;
American Research Press: Rehoboth, DE, USA, 1998.
3. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single valued neutrosophic sets. In Multi-Space and
Multi-Structure; Infinite Study: New Delhi, India, 2010; Volume 4, pp. 410–413.
4. Deli, I.; Ali, M.; Smarandache, F. Bipolar neutrosophic sets and their application based on multi-criteria decision making problems. In Proceedings of the 2015 International Conference on Advanced Mechatronic Systems, Beijing, China, 20–24 August 2015; pp. 249–254.
5. Zhang, W.R. Bipolar fuzzy sets and relations: A computational framework for cognitive modeling and multiagent decision analysis. In Proceedings of the IEEE Fuzzy Information Processing Society Biannual Conference, San Antonio, TX, USA, 18–21 December 1994; pp. 305–309.
6. Hwang, C.L.; Yoon, K. Multi Attribute Decision Making: Methods and Applications; Springer:
New York, NY, USA, 1981.
7. Chen, C.-T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst.
2000, 114, 1–9. [CrossRef]
8. Chu, T.C. Facility location selection using fuzzy TOPSIS under group decisions. Int. J. Uncertain. Fuzziness
Knowl.-Based Syst. 2002, 10, 687–701. [CrossRef]
9. Hadi-Vencheh, A.; Mirjaberi, M. Fuzzy inferior ratio method for multiple attribute decision making problems.
Inf. Sci. 2014, 277, 263–272. [CrossRef]
10. Joshi, D.; Kumar, S. Intuitionistic fuzzy entropy and distance measure based TOPSIS method for multi-criteria
decision making. Egypt. Inform. J. 2014, 15, 97–104. [CrossRef]
11. Kolios, A.; Mytilinou, V.; Lozano-Minguez, E.; Salonitis, K. A comparative study of multiple-criteria
decision-making methods under stochastic inputs. Energies 2016, 9, 566. [CrossRef]
12. Akram, M.; Alshehri, N.O.; Davvaz, B.; Ashraf, A. Bipolar fuzzy digraphs in decision support systems.
J. Multiple-Valued Log. Soft Comput. 2016, 27, 531–551.
13. Akram, M.; Feng, F.; Saeid, A.B.; Fotea, V. A new multiple criteria decision-making method based on bipolar
fuzzy soft graphs. Iran. J. Fuzzy Syst. 2017. [CrossRef]
14. Akram, M.; Waseem, N. Novel applications of bipolar fuzzy graphs to decision making problems. J. Appl.
Math. Comput. 2018, 56, 73–91. [CrossRef]
15. Sarwar, M.; Akram, M. Certain algorithms for computing strength of competition in bipolar fuzzy graphs.
Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2017, 25, 877–896. [CrossRef]
16. Sarwar, M.; Akram, M. Novel concepts of bipolar fuzzy competition graphs. J. Appl. Math. Comput. 2017, 54,
511–547. [CrossRef]
17. Faizi, S.; Salabun, W.; Rashid, T.; Watróbski, J.; Zafar, S. Group decision-making for hesitant fuzzy sets based
on characteristic objects method. Symmetry 2017, 9, 136. [CrossRef]
18. Alghamdi, M.A.; Alshehri, N.O.; Akram, M. Multi-criteria decision-making methods in bipolar fuzzy
environment. Int. J. Fuzzy Syst. 2018. [CrossRef]
19. Dey, P.P.; Pramanik, S.; Giri, B.C. TOPSIS for solving multi-attribute decision making problems under
bipolar neutrosophic environment. In New Trends in Neutrosophic Theory and Applications; Pons Editions:
Brussels, Belgium, 2016; pp. 65–77.
20. Benayoun, R.; Roy, B.; Sussman, N. Manual de Reference du Programme Electre; Note de Synthese et Formation,
No. 25; Direction Scientific SEMA: Paris, France, 1966.
21. Hatami-Marbini, A.; Tavana, M. An extension of the ELECTRE method for group decision making under
a fuzzy environment. Omega 2011, 39, 373–386. [CrossRef]
22. Aytaç, E.; Isik, , A.T.; Kundaki, N. Fuzzy ELECTRE I method for evaluating catering firm alternatives.
Ege Acad. Rev. 2011, 11, 125–134.
23. Wu, M.-C.; Chen, T.-Y. The ELECTRE multicriteria analysis approach based on Atanassov's intuitionistic fuzzy sets. Expert Syst. Appl. 2011, 38, 12318–12327. [CrossRef]
24. Chu, T-C. Selecting plant location via a fuzzy TOPSIS approach. Int. J. Adv. Manuf. Technol. 2002, 20, 859–864.
25. Guarini, M.R.; Battisti, F.; Chiovitti, A. A Methodology for the selection of multi-criteria decision analysis
methods in real estate and land management processes. Sustainability 2018, 10, 507. [CrossRef]


26. Hatefi, S.M.; Tamošaitiene, J. Construction projects assessment based on the sustainable development criteria
by an integrated fuzzy AHP and improved GRA model. Sustainability 2018, 10, 991. [CrossRef]
27. Huang, Y.P.; Basanta, H.; Kuo, H.C.; Huang, A. Health symptom checking system for elderly people using
fuzzy analytic hierarchy process. Appl. Syst. Innov. 2018, 1, 10. [CrossRef]
28. Salabun, W.; Piegat, A. Comparative analysis of MCDM methods for the assessment of mortality in patients
with acute coronary syndrome. Artif. Intell. Rev. 2017, 48, 557–571. [CrossRef]
29. Triantaphyllou, E. Multi-Criteria Decision Making Methods, Multi-Criteria Decision Making Methods:
A Comparative Study; Springer: Boston, MA, USA, 2000; pp. 5–21.
30. Yang, Y.M. Using the method of maximizing deviations to make decision for multi-indices.
Syst. Eng. Electron. 1998, 7, 24–31.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Interval Neutrosophic Sets with Applications in
BCK/BCI-Algebra
Young Bae Jun 1, *, Seon Jeong Kim 2 and Florentin Smarandache 3
1 Department of Mathematics Education Gyeongsang National University, Jinju 52828, Korea
2 Department of Mathematics, Natural Science of College, Gyeongsang National University,
Jinju 52828, Korea; [email protected]
3 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]

Received: 27 February 2018; Accepted: 06 April 2018; Published: 9 April 2018

Abstract: For i, j, k, l, m, n ∈ {1, 2, 3, 4}, the notion of a (T(i, j), I(k, l), F(m, n))-interval neutrosophic subalgebra in a BCK/BCI-algebra is introduced, and its properties and relations are investigated. The notion of the interval neutrosophic length of an interval neutrosophic set is also introduced, and related properties are investigated.

Keywords: interval neutrosophic set; interval neutrosophic subalgebra; interval neutrosophic length

MSC: 06F35, 03G25, 03B52

1. Introduction
Intuitionistic fuzzy sets, introduced by Atanassov [1], are a generalization of Zadeh's fuzzy sets [2] and consider both truth-membership and falsity-membership. Since the sum of the degrees of truth, indeterminacy and falsity is one in an intuitionistic fuzzy set, only incomplete information can be handled by intuitionistic fuzzy sets. On the other hand, neutrosophic sets can handle the indeterminate and inconsistent information that commonly exist in belief systems, since in a neutrosophic set indeterminacy is quantified explicitly and truth-membership, indeterminacy-membership and falsity-membership are independent, as mentioned in [3]. As a formal framework that generalizes the concepts of the classic set, fuzzy set, interval-valued fuzzy set, intuitionistic fuzzy set, interval-valued intuitionistic fuzzy set, paraconsistent set, etc., the neutrosophic set was developed by Smarandache [4,5], and it has been applied to various areas, including algebra, topology, control theory, decision-making problems, medicine and many real-life problems. The concept of interval neutrosophic sets was presented by Wang et al. [6], and it is more precise and more flexible than the single-valued neutrosophic set. An interval neutrosophic set can represent the uncertain, imprecise, incomplete and inconsistent information that exists in the real world. BCK-algebras were introduced by Imai and Iséki [7] and have been applied to several branches of mathematics, such as group theory, functional analysis, probability theory and topology. As a generalization of BCK-algebras, Iséki introduced the notion of BCI-algebras (see [8]).
In this article, we discuss interval neutrosophic sets in BCK/BCI-algebra. We introduce the notion
of ( T (i, j), I (k, l ), F (m, n))-interval neutrosophic subalgebra in BCK/BCI-algebra for i, j, k, l, m, n ∈
{1, 2, 3, 4}, and investigate their properties and relations. We also introduce the notion of interval
neutrosophic length of an interval neutrosophic set, and investigate related properties.

Axioms 2018, 7, 23; doi:10.3390/axioms7020023; www.mdpi.com/journal/axioms



2. Preliminaries
By a BCI-algebra, we mean a system X := ( X, ∗, 0) ∈ K (τ ) in which the following axioms hold:

(I) (( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0,
(II) ( x ∗ ( x ∗ y)) ∗ y = 0,
(III) x ∗ x = 0,
(IV) x∗y = y∗x = 0 ⇒ x = y

for all x, y, z ∈ X. If a BCI-algebra X satisfies 0 ∗ x = 0 for all x ∈ X, then we say that X is a BCK-algebra.
A non-empty subset S of a BCK/BCI-algebra X is called a subalgebra of X if x ∗ y ∈ S for all
x, y ∈ S.
The collection of all BCK-algebra and all BCI-algebra are denoted by BK ( X ) and B I ( X ),
respectively. In addition, B( X ) := BK ( X ) ∪ B I ( X ).
We refer the reader to the books [9,10] for further information regarding BCK/BCI-algebra.
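For readers who prefer a computational view, the following Python sketch (names illustrative) checks the axioms (I)–(IV) and the subalgebra condition for a finite Cayley table given as a dictionary; for instance, the operation of Table 1 below satisfies the BCK conditions, and the subset {0, 1, 2} is closed under it and hence a subalgebra.

```python
from itertools import product

def is_bci(X, star):
    """Check axioms (I)-(IV) for a finite set X with operation star
    given as a dict mapping pairs (x, y) to x * y."""
    s = lambda a, b: star[(a, b)]
    for x, y, z in product(X, repeat=3):
        if s(s(s(x, y), s(x, z)), s(z, y)) != 0:          # (I)
            return False
    for x, y in product(X, repeat=2):
        if s(s(x, s(x, y)), y) != 0:                      # (II)
            return False
        if s(x, y) == 0 and s(y, x) == 0 and x != y:      # (IV)
            return False
    return all(s(x, x) == 0 for x in X)                   # (III)

def is_bck(X, star):
    """A BCI-algebra is a BCK-algebra when 0 * x = 0 for all x in X."""
    return is_bci(X, star) and all(star[(0, x)] == 0 for x in X)

def is_subalgebra(S, star):
    """A non-empty subset S is a subalgebra if it is closed under *."""
    return bool(S) and all(star[(x, y)] in S for x in S for y in S)
```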
By a fuzzy structure over a nonempty set X, we mean an ordered pair ( X, ρ) of X and a fuzzy set ρ
on X.

Definition 1 ([11]). For any ( X, ∗, 0) ∈ B( X ), a fuzzy structure ( X, μ) over ( X, ∗, 0) is called a

• fuzzy subalgebra of ( X, ∗, 0) with type 1 (briefly, 1-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ min{μ( x ), μ(y)}) , (1)

• fuzzy subalgebra of ( X, ∗, 0) with type 2 (briefly, 2-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ min{μ( x ), μ(y)}) , (2)

• fuzzy subalgebra of ( X, ∗, 0) with type 3 (briefly, 3-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≥ max{μ( x ), μ(y)}) , (3)

• fuzzy subalgebra of ( X, ∗, 0) with type 4 (briefly, 4-fuzzy subalgebra of ( X, ∗, 0)) if

(∀ x, y ∈ X ) (μ( x ∗ y) ≤ max{μ( x ), μ(y)}) . (4)

Let X be a non-empty set. A neutrosophic set (NS) in X (see [4]) is a structure of the form:

A := { x; A T ( x ), A I ( x ), A F ( x ) | x ∈ X },

where A T : X → [0, 1] is a truth-membership function, A I : X → [0, 1] is an indeterminate membership


function, and A F : X → [0, 1] is a false membership function.
An interval neutrosophic set (INS) A in X is characterized by truth-membership function TA ,
indeterminacy membership function I A and falsity-membership function FA . For each point x in X,
TA ( x ), I A ( x ), FA ( x ) ∈ [0, 1] (see [3,6]).

3. Interval Neutrosophic Subalgebra


In what follows, let ( X, ∗, 0) ∈ B( X ) and P ∗ ([0, 1]) be the family of all subintervals of [0, 1] unless
otherwise specified.

Definition 2 ([3,6]). An interval neutrosophic set in a nonempty set X is a structure of the form:

I := { x, I[ T ]( x ), I[ I ]( x ), I[ F ]( x ) | x ∈ X },


where
I[ T ] : X → P ∗ ([0, 1]),
which is called interval truth-membership function,

I[ I ] : X → P ∗ ([0, 1]),

which is called interval indeterminacy-membership function, and

I[ F ] : X → P ∗ ([0, 1]),

which is called interval falsity-membership function.

For the sake of simplicity, we will use the notation I := (I[ T ], I[ I ], I[ F ]) for the interval
neutrosophic set
I := { x, I[ T ]( x ), I[ I ]( x ), I[ F ]( x ) | x ∈ X }.
Given an interval neutrosophic set I := (I[T ], I[ I ], I[ F]) in X, we consider the following functions:

I[ T ]inf : X → [0, 1], x → inf{I[ T ]( x )},


I[ I ]inf : X → [0, 1], x → inf{I[ I ]( x )},
I[ F ]inf : X → [0, 1], x → inf{I[ F ]( x )},

and

I[ T ]sup : X → [0, 1], x → sup{I[ T ]( x )},


I[ I ]sup : X → [0, 1], x → sup{I[ I ]( x )},
I[ F ]sup : X → [0, 1], x → sup{I[ F ]( x )}.

Definition 3. For any i, j, k, l, m, n ∈ {1, 2, 3, 4}, an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X


is called a ( T (i, j), I (k, l ), F (m, n))-interval neutrosophic subalgebra of X if the following assertions are valid.

(1) ( X, I[ T ]inf ) is an i-fuzzy subalgebra of ( X, ∗, 0) and ( X, I[ T ]sup ) is a j-fuzzy subalgebra of ( X, ∗, 0),


(2) ( X, I[ I ]inf ) is a k-fuzzy subalgebra of ( X, ∗, 0) and ( X, I[ I ]sup ) is an l-fuzzy subalgebra of ( X, ∗, 0),
(3) ( X, I[ F ]inf ) is an m-fuzzy subalgebra of ( X, ∗, 0) and ( X, I[ F ]sup ) is an n-fuzzy subalgebra of ( X, ∗, 0).

Example 1. Consider a BCK-algebra X = {0, 1, 2, 3} with the binary operation ∗, which is given in Table 1
(see [10]).

Table 1. Cayley table for the binary operation “∗”.

∗ 0 1 2 3
0 0 0 0 0
1 1 0 0 1
2 2 1 0 2
3 3 3 3 0

(1) Let I := (I[T], I[I], I[F]) be an interval neutrosophic set in (X, ∗, 0) for which I[T], I[I] and I[F] are given as follows:

I[T] : X → P∗([0, 1]), x ↦ [0.4, 0.5) if x = 0; (0.3, 0.5] if x = 1; [0.2, 0.6) if x = 2; [0.1, 0.7] if x = 3,

I[I] : X → P∗([0, 1]), x ↦ [0.5, 0.8) if x = 0; (0.2, 0.7) if x = 1; [0.5, 0.6] if x = 2; [0.4, 0.8) if x = 3,

and

I[F] : X → P∗([0, 1]), x ↦ [0.4, 0.5) if x = 0; (0.2, 0.9) if x = 1; [0.1, 0.6] if x = 2; (0.4, 0.7] if x = 3.
It is routine to verify that I := (I[ T ], I[ I ], I[ F ]) is a ( T (1, 4), I (1, 4), F (1, 4))-interval neutrosophic
subalgebra of ( X, ∗, 0).
(2) Let I := (I[T], I[I], I[F]) be an interval neutrosophic set in (X, ∗, 0) for which I[T], I[I] and I[F] are given as follows:

I[T] : X → P∗([0, 1]), x ↦ [0.1, 0.4) if x = 0; (0.3, 0.5) if x = 1; [0.2, 0.7] if x = 2; [0.4, 0.6) if x = 3,

I[I] : X → P∗([0, 1]), x ↦ (0.2, 0.5) if x = 0; [0.5, 0.8] if x = 1; (0.4, 0.5] if x = 2; [0.2, 0.6] if x = 3,

and

I[F] : X → P∗([0, 1]), x ↦ [0.3, 0.4) if x = 0; (0.4, 0.7) if x = 1; (0.6, 0.8) if x = 2; [0.4, 0.6] if x = 3.
By routine calculations, we know that I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 4), I (4, 4), F (4, 4))-interval
neutrosophic subalgebra of ( X, ∗, 0).
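The "routine" verifications above can also be carried out mechanically. The sketch below applies, component by component, the type conditions of Definition 1 to the inf and sup functions of an interval neutrosophic set over a finite Cayley table, as required by Definition 3; the encoding of intervals as (inf, sup) pairs and the transcription of part (1) are assumptions for illustration, and the printout reports, for each of I[T], I[I], I[F], whether inf is of type 1 and sup is of type 4.

```python
from itertools import product

# Cayley table of Table 1 (a BCK-algebra)
star = {(0, 0): 0, (0, 1): 0, (0, 2): 0, (0, 3): 0,
        (1, 0): 1, (1, 1): 0, (1, 2): 0, (1, 3): 1,
        (2, 0): 2, (2, 1): 1, (2, 2): 0, (2, 3): 2,
        (3, 0): 3, (3, 1): 3, (3, 2): 3, (3, 3): 0}
X = [0, 1, 2, 3]

# Interval neutrosophic set of Example 1(1); intervals encoded as (inf, sup)
IT = {0: (0.4, 0.5), 1: (0.3, 0.5), 2: (0.2, 0.6), 3: (0.1, 0.7)}
II = {0: (0.5, 0.8), 1: (0.2, 0.7), 2: (0.5, 0.6), 3: (0.4, 0.8)}
IF = {0: (0.4, 0.5), 1: (0.2, 0.9), 2: (0.1, 0.6), 3: (0.4, 0.7)}

def fuzzy_type(mu, t):
    """Check whether (X, mu) is a t-fuzzy subalgebra (t in {1, 2, 3, 4})."""
    cond = {1: lambda a, b, c: a >= min(b, c),
            2: lambda a, b, c: a <= min(b, c),
            3: lambda a, b, c: a >= max(b, c),
            4: lambda a, b, c: a <= max(b, c)}[t]
    return all(cond(mu[star[(x, y)]], mu[x], mu[y]) for x, y in product(X, X))

inf_of = lambda I: {x: I[x][0] for x in X}
sup_of = lambda I: {x: I[x][1] for x in X}

# Report the Definition 3 conditions for a (T(1,4), I(1,4), F(1,4))-interval
# neutrosophic subalgebra: inf of type 1 and sup of type 4, per component.
for name, I in (("I[T]", IT), ("I[I]", II), ("I[F]", IF)):
    print(name, fuzzy_type(inf_of(I), 1), fuzzy_type(sup_of(I), 4))
```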

Example 2. Consider a BCI-algebra X = {0, a, b, c} with the binary operation ∗, which is given in Table 2
(see [10]).

Table 2. Cayley table for the binary operation “∗”.

∗ 0 a b c
0 0 a b c
a a 0 c b
b b c 0 a
c c b a 0

Let I := (I[T], I[I], I[F]) be an interval neutrosophic set in (X, ∗, 0) for which I[T], I[I] and I[F] are given as follows:

I[T] : X → P∗([0, 1]), x ↦ [0.3, 0.9) if x = 0; (0.7, 0.9) if x = a; [0.7, 0.8) if x = b; (0.5, 0.8] if x = c,

I[I] : X → P∗([0, 1]), x ↦ [0.2, 0.65) if x = 0; [0.5, 0.55] if x = a; (0.6, 0.65) if x = b; [0.5, 0.55) if x = c,

and

I[F] : X → P∗([0, 1]), x ↦ (0.3, 0.6) if x = 0; [0.4, 0.6] if x = a; (0.4, 0.5] if x = b; [0.3, 0.5) if x = c.
Routine calculations show that I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 1), I (4, 1), F (4, 1))-interval
neutrosophic subalgebra of ( X, ∗, 0). However, it is not a ( T (2, 1), I (2, 1), F (2, 1))-interval neutrosophic
subalgebra of ( X, ∗, 0) since

I[ T ]inf (c ∗ a) = I[ T ]inf (b) = 0.7 > 0.5 = min{I[ T ]inf (c), I[ T ]inf ( a)}

and/or
I[ I ]inf ( a ∗ c) = I[ I ]inf (b) = 0.6 > 0.5 = min{I[ I ]inf ( a), I[ I ]inf (c)}.
In addition, it is not a ( T (4, 3), I (4, 3), F (4, 3))-interval neutrosophic subalgebra of ( X, ∗, 0) since

I[ T ]sup ( a ∗ b) = I[ T ]sup (c) = 0.8 < 0.9 = max{I[ T ]sup ( a), I[ T ]sup (b)}

and/or

I[ F ]sup ( a ∗ b) = I[ F ]sup (c) = 0.5 < 0.6 = max{I[ F ]sup ( a), I[ F ]sup (b)}.

Let I := (I[ T ], I[ I ], I[ F ]) be an interval neutrosophic set in X. We consider the following sets:

U (I[ T ]inf ; α I ) := { x ∈ X | I[ T ]inf ( x ) ≥ α I },


L(I[ T ]sup ; αS ) := { x ∈ X | I[ T ]sup ( x ) ≤ αS },
U (I[ I ]inf ; β I ) := { x ∈ X | I[ I ]inf ( x ) ≥ β I },
L(I[ I ]sup ; β S ) := { x ∈ X | I[ I ]sup ( x ) ≤ β S },

and

U (I[ F ]inf ; γ I ) := { x ∈ X | I[ F ]inf ( x ) ≥ γ I },


L(I[ F ]sup ; γS ) := { x ∈ X | I[ F ]sup ( x ) ≤ γS },

where α I , αS , β I , β S , γ I and γS are numbers in [0, 1].

Theorem 1. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 4), I (i, 4), F (i, 4))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {1, 3}, then U (I[ T ]inf ; α I ), L(I[ T ]sup ; αS ), U (I[ I ]inf ; β I ),
L(I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS , β I ,
β S , γ I , γS ∈ [0, 1].

Proof. Assume that I := (I[ T ], I[ I ], I[ F ]) is a ( T (1, 4), I (1, 4), F (1, 4))-interval neutrosophic
subalgebra of ( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 1-fuzzy subalgebra of X;
and ( X, I[ T ]sup ), ( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 4-fuzzy subalgebra of X. Let α I , αS ∈ [0, 1] be such
that U (I[ T ]inf ; α I ) and L(I[ T ]sup ; αS ) are nonempty. For any x, y ∈ X, if x, y ∈ U (I[ T ]inf ; α I ), then
I[ T ]inf ( x ) ≥ α I and I[ T ]inf (y) ≥ α I , and so

I[ T ]inf ( x ∗ y) ≥ min{I[ T ]inf ( x ), I[ T ]inf (y)} ≥ α I ,


that is, x ∗ y ∈ U (I[ T ]inf ; α I ). If x, y ∈ L(I[ T ]sup ; αS ), then I[ T ]sup ( x ) ≤ αS and I[ T ]sup (y) ≤ αS ,
which imply that
I[ T ]sup ( x ∗ y) ≤ max{I[ T ]sup ( x ), I[ T ]sup (y)} ≤ αS ,
that is, x ∗ y ∈ L(I[ T ]sup ; αS ). Hence, U (I[ T ]inf ; α I ) and L(I[ T ]sup ; αS ) are subalgebra of ( X, ∗, 0)
for all α I , αS ∈ [0, 1]. Similarly, we can prove that U (I[ I ]inf ; β I ), L(I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and
L(I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all β I , β S , γ I , γS ∈ [0, 1]. Suppose
that I := (I[ T ], I[ I ], I[ F ]) is a ( T (3, 4), I (3, 4), F (3, 4))-interval neutrosophic subalgebra of
( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 3-fuzzy subalgebra of X; and ( X, I[ T ]sup ),
( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 4-fuzzy subalgebra of X. Let β I and β S ∈ [0, 1] be such that
U (I[ I ]inf ; β I ) and L(I[ I ]sup ; β S ) are nonempty. Let x, y ∈ U (I[ I ]inf ; β I ). Then, I[ I ]inf ( x ) ≥ β I and
I[ I ]inf (y) ≥ β I . It follows that

I[ I ]inf ( x ∗ y) ≥ max{I[ I ]inf ( x ), I[ I ]inf (y)} ≥ β I

and so x ∗ y ∈ U (I[ I ]inf ; β I ). Thus, U (I[ I ]inf ; β I ) is a subalgebra of ( X, ∗, 0). If x, y ∈ L(I[ I ]sup ; β S ), then I[ I ]sup ( x ) ≤ β S and I[ I ]sup (y) ≤ β S . Hence,

I[ I ]sup ( x ∗ y) ≤ max{I[ I ]sup ( x ), I[ I ]sup (y)} ≤ β S ,

and so x ∗ y ∈ L(I[ I ]sup ; β S ). Thus, L(I[ I ]sup ; β S ) is a subalgebra of ( X, ∗, 0). Similarly, we can show
that U (I[ T ]inf ; α I ), L(I[ T ]sup ; αS ), U (I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are either empty or subalgebra of
( X, ∗, 0) for all α I , αS , γ I , γS ∈ [0, 1].

Since every 2-fuzzy subalgebra is a 4-fuzzy subalgebra, we have the following corollary.

Corollary 1. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 2), I (i, 2), F (i, 2))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {1, 3}, then U (I[ T ]inf ; α I ), L(I[ T ]sup ; αS ), U (I[ I ]inf ; β I ),
L(I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS , β I ,
β S , γ I , γS ∈ [0, 1].

By a similar way to the proof of Theorem 1, we have the following theorems.

Theorem 2. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 4), I (i, 4), F (i, 4))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then L(I[ T ]inf ; α I ), L(I[ T ]sup ; αS ), L(I[ I ]inf ; β I ),
L(I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS , β I ,
β S , γ I , γS ∈ [0, 1].

Corollary 2. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 2), I (i, 2), F (i, 2))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then L(I[ T ]inf ; α I ), L(I[ T ]sup ; αS ), L(I[ I ]inf ; β I ),
L(I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS , β I ,
β S , γ I , γS ∈ [0, 1].

Theorem 3. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (k, 1), I (k, 1), F (k, 1))-interval
neutrosophic subalgebra of ( X, ∗, 0) for k ∈ {1, 3}, then U (I[ T ]inf ; α I ), U (I[ T ]sup ; αS ), U (I[ I ]inf ; β I ),
U (I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS ,
β I , β S , γ I , γS ∈ [0, 1].

Corollary 3. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (k, 3), I (k, 3),


F (k, 3))-interval neutrosophic subalgebra of ( X, ∗, 0) for k ∈ {1, 3}, then U (I[ T ]inf ; α I ), U (I[ T ]sup ; αS ),
U (I[ I ]inf ; β I ), U (I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0)
for all α I , αS , β I , β S , γ I , γS ∈ [0, 1].


Theorem 4. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (k, 1), I (k, 1), F (k, 1))-interval
neutrosophic subalgebra of ( X, ∗, 0) for k ∈ {2, 4}, then L(I[ T ]inf ; α I ), U (I[ T ]sup ; αS ), L(I[ I ]inf ; β I ),
U (I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0) for all α I , αS ,
β I , β S , γ I , γS ∈ [0, 1].

Corollary 4. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (k, 3), I (k, 3),


F (k, 3))-interval neutrosophic subalgebra of ( X, ∗, 0) for k ∈ {2, 4}, then L(I[ T ]inf ; α I ), U (I[ T ]sup ; αS ),
L(I[ I ]inf ; β I ), U (I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are either empty or subalgebra of ( X, ∗, 0)
for all α I , αS , β I , β S , γ I , γS ∈ [0, 1].

Theorem 5. Let I := (I[ T ], I[ I ], I[ F ]) be an interval neutrosophic set in X in which U (I[ T ]inf ; α I ),


L(I[ T ]sup ; αS ), U (I[ I ]inf ; β I ), L(I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are nonempty subalgebra
of ( X, ∗, 0) for all α I , αS , β I , β S , γ I , γS ∈ [0, 1]. Then, I := (I[ T ], I[ I ], I[ F ]) is a ( T (1, 4), I (1, 4),
F (1, 4))-interval neutrosophic subalgebra of ( X, ∗, 0).

Proof. Suppose that ( X, I[ T ]inf ) is not a 1-fuzzy subalgebra of ( X, ∗, 0). Then, there exists x, y ∈ X
such that
I[ T ]inf ( x ∗ y) < min{I[ T ]inf ( x ), I[ T ]inf (y)}.
If we take α I = min{I[ T ]inf ( x ), I[ T ]inf (y)}, then x, y ∈ U (I[ T ]inf ; α I ), but x ∗ y ∉ U (I[ T ]inf ; α I ).
This is a contradiction, and so ( X, I[ T ]inf ) is a 1-fuzzy subalgebra of ( X, ∗, 0). If ( X, I[ T ]sup ) is not a
4-fuzzy subalgebra of ( X, ∗, 0), then

I[ T ]sup ( a ∗ b) > max{I[ T ]sup ( a), I[ T ]sup (b)}

for some a, b ∈ X, and so a, b ∈ L(I[ T ]sup ; αS ) and a ∗ b ∉ L(I[ T ]sup ; αS ) by taking

αS := max{I[ T ]sup ( a), I[ T ]sup (b)}.

This is a contradiction, and therefore ( X, I[ T ]sup ) is a 4-fuzzy subalgebra of ( X, ∗, 0). Similarly, we


can verify that ( X, I[ I ]inf ) is a 1-fuzzy subalgebra of ( X, ∗, 0) and ( X, I[ I ]sup ) is a 4-fuzzy subalgebra
of ( X, ∗, 0); and ( X, I[ F ]inf ) is a 1-fuzzy subalgebra of ( X, ∗, 0) and ( X, I[ F ]sup ) is a 4-fuzzy subalgebra
of ( X, ∗, 0). Consequently, I := (I[ T ], I[ I ], I[ F ]) is a ( T (1, 4), I (1, 4), F (1, 4))-interval neutrosophic
subalgebra of ( X, ∗, 0).

Using the similar method to the proof of Theorem 5, we get the following theorems.

Theorem 6. Let I := (I[ T ], I[ I ], I[ F ]) be an interval neutrosophic set in X in which L(I[ T ]inf ; α I ),


U (I[ T ]sup ; αS ), L(I[ I ]inf ; β I ), U (I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are nonempty subalgebra
of ( X, ∗, 0) for all α I , αS , β I , β S , γ I , γS ∈ [0, 1]. Then, I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 1), I (4, 1),
F (4, 1))-interval neutrosophic subalgebra of ( X, ∗, 0).

Theorem 7. Let I := (I[ T ], I[ I ], I[ F ]) be an interval neutrosophic set in X in which L(I[ T ]inf ; α I ),


L(I[ T ]sup ; αS ), L(I[ I ]inf ; β I ), L(I[ I ]sup ; β S ), L(I[ F ]inf ; γ I ) and L(I[ F ]sup ; γS ) are nonempty subalgebra
of ( X, ∗, 0) for all α I , αS , β I , β S , γ I , γS ∈ [0, 1]. Then, I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 4), I (4, 4),
F (4, 4))-interval neutrosophic subalgebra of ( X, ∗, 0).

Theorem 8. Let I := (I[ T ], I[ I ], I[ F ]) be an interval neutrosophic set in X in which U (I[ T ]inf ; α I ),


U (I[ T ]sup ; αS ), U (I[ I ]inf ; β I ), U (I[ I ]sup ; β S ), U (I[ F ]inf ; γ I ) and U (I[ F ]sup ; γS ) are nonempty subalgebra
of ( X, ∗, 0) for all α I , αS , β I , β S , γ I , γS ∈ [0, 1]. Then, I := (I[ T ], I[ I ], I[ F ]) is a ( T (1, 1), I (1, 1),
F (1, 1))-interval neutrosophic subalgebra of ( X, ∗, 0).


4. Interval Neutrosophic Lengths


Definition 4. Given an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X, we define the interval
neutrosophic length of I as an ordered triple I := (I[ T ] , I[ I ] , I[ F ] ) where

I[ T ] : X → [0, 1], x → I[ T ]sup ( x ) − I[ T ]inf ( x ),


I[ I ] : X → [0, 1], x → I[ I ]sup ( x ) − I[ I ]inf ( x ),

and
I[ F ] : X → [0, 1], x → I[ F ]sup ( x ) − I[ F ]inf ( x ),
which are called interval neutrosophic T-length, interval neutrosophic I-length and interval neutrosophic
F-length of I , respectively.

Example 3. Consider the interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X, which is given in Example 2.
Then, the interval neutrosophic length of I is given by Table 3.

Table 3. Interval neutrosophic length of I .

X I [ T ] I [ I ] I [ F ]
0 0.6 0.45 0.3
a 0.2 0.05 0.2
b 0.1 0.05 0.1
c 0.3 0.05 0.2
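
Definition 4 is straightforward to compute: each length is simply the width sup − inf of the corresponding interval. The following Python sketch is an editorial illustration, not part of the original paper; the interval endpoints used below are hypothetical and are only chosen so that the resulting lengths match the first two rows of Table 3.

```python
# Minimal sketch of Definition 4 (interval neutrosophic length = sup - inf),
# assuming an interval neutrosophic set is stored as a dict mapping each element
# of X to three [inf, sup] intervals (T, I, F). Hypothetical data, not Example 2.

def interval_lengths(ins):
    """Return (T-length, I-length, F-length) for every element of X."""
    return {
        x: tuple(round(sup - inf, 4) for (inf, sup) in (t, i, f))
        for x, (t, i, f) in ins.items()
    }

sample = {
    "0": ([0.2, 0.8], [0.15, 0.60], [0.1, 0.4]),  # hypothetical intervals
    "a": ([0.5, 0.7], [0.30, 0.35], [0.2, 0.4]),
}
print(interval_lengths(sample))  # {'0': (0.6, 0.45, 0.3), 'a': (0.2, 0.05, 0.2)}
```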

Theorem 9. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (i, 3), F (i, 3))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 3-fuzzy
subalgebra of ( X, ∗, 0).

Proof. Assume that I := (I[ T ], I[ I ], I[ F ]) is a ( T (2, 3), I (2, 3), F (2, 3))-interval neutrosophic
subalgebra of ( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 2-fuzzy subalgebra of X, and
( X, I[ T ]sup ), ( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 3-fuzzy subalgebra of X. Thus,

I[ T ]inf ( x ∗ y) ≤ min{I[ T ]inf ( x ), I[ T ]inf (y)},


I[ I ]inf ( x ∗ y) ≤ min{I[ I ]inf ( x ), I[ I ]inf (y)},
I[ F ]inf ( x ∗ y) ≤ min{I[ F ]inf ( x ), I[ F ]inf (y)},

and

I[ T ]sup ( x ∗ y) ≥ max{I[ T ]sup ( x ), I[ T ]sup (y)},


I[ I ]sup ( x ∗ y) ≥ max{I[ I ]sup ( x ), I[ I ]sup (y)},
I[ F ]sup ( x ∗ y) ≥ max{I[ F ]sup ( x ), I[ F ]sup (y)},

for all x, y ∈ X. It follows that

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup ( x ) − I[ T ]inf ( x ) = I[ T ] ( x ),


I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup (y) − I[ T ]inf (y) = I[ T ] (y),
I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≥ I[ I ]sup ( x ) − I[ I ]inf ( x ) = I[ I ] ( x ),
I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≥ I[ I ]sup (y) − I[ I ]inf (y) = I[ I ] (y),


and

I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≥ I[ F ]sup ( x ) − I[ F ]inf ( x ) = I[ F ] ( x ),


I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≥ I[ F ]sup (y) − I[ F ]inf (y) = I[ F ] (y).

Hence,

I[ T ] ( x ∗ y) ≥ max{I[ T ] ( x ), I[ T ] (y)},
I[ I ] ( x ∗ y) ≥ max{I[ I ] ( x ), I[ I ] (y)},

and
I[ F ] ( x ∗ y) ≥ max{I[ F ] ( x ), I[ F ] (y)},
for all x, y ∈ X. Therefore, ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 3-fuzzy subalgebra of ( X, ∗, 0).
Suppose that I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 3), I (4, 3), F (4, 3))-interval neutrosophic subalgebra
of ( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 4-fuzzy subalgebra of X, and
( X, I[ T ]sup ), ( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 3-fuzzy subalgebra of X. Hence,

I[ T ]inf ( x ∗ y) ≤ max{I[ T ]inf ( x ), I[ T ]inf (y)},


I[ I ]inf ( x ∗ y) ≤ max{I[ I ]inf ( x ), I[ I ]inf (y)}, (5)
I[ F ]inf ( x ∗ y) ≤ max{I[ F ]inf ( x ), I[ F ]inf (y)},

and

I[ T ]sup ( x ∗ y) ≥ max{I[ T ]sup ( x ), I[ T ]sup (y)},


I[ I ]sup ( x ∗ y) ≥ max{I[ I ]sup ( x ), I[ I ]sup (y)},
I[ F ]sup ( x ∗ y) ≥ max{I[ F ]sup ( x ), I[ F ]sup (y)},

for all x, y ∈ X. Label (5) implies that

I[ T ]inf ( x ∗ y) ≤ I[ T ]inf ( x ) or I[ T ]inf ( x ∗ y) ≤ I[ T ]inf (y),


I[ I ]inf ( x ∗ y) ≤ I[ I ]inf ( x ) or I[ I ]inf ( x ∗ y) ≤ I[ I ]inf (y),
I[ F ]inf ( x ∗ y) ≤ I[ F ]inf ( x ) or I[ F ]inf ( x ∗ y) ≤ I[ F ]inf (y).

If I[ T ]inf ( x ∗ y) ≤ I[ T ]inf ( x ), then

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup ( x ) − I[ T ]inf ( x ) = I[ T ] ( x ).

If I[ T ]inf ( x ∗ y) ≤ I[ T ]inf (y), then

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup (y) − I[ T ]inf (y) = I[ T ] (y).

It follows that I[ T ] ( x ∗ y) ≥ max{I[ T ] ( x ), I[ T ] (y)}. Therefore, ( X, I[ T ] ) is a 3-fuzzy


subalgebra of ( X, ∗, 0). Similarly, we can show that ( X, I[ I ] ) and ( X, I[ F ] ) are 3-fuzzy subalgebra of
( X, ∗, 0).

Corollary 5. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (i, 3), F (i, 3))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 1-fuzzy
subalgebra of ( X, ∗, 0).


Theorem 10. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (3, 4), I (3, 4),
F (3, 4))-interval neutrosophic subalgebra of ( X, ∗, 0), then ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 4-fuzzy
subalgebra of ( X, ∗, 0).

Proof. Let I := (I[ T ], I[ I ], I[ F ]) be a ( T (3, 4), I (3, 4), F (3, 4))-interval neutrosophic subalgebra of
( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 3-fuzzy subalgebra of X, and ( X, I[ T ]sup ),
( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 4-fuzzy subalgebra of X. Thus,

I[ T ]inf ( x ∗ y) ≥ max{I[ T ]inf ( x ), I[ T ]inf (y)},


I[ I ]inf ( x ∗ y) ≥ max{I[ I ]inf ( x ), I[ I ]inf (y)},
I[ F ]inf ( x ∗ y) ≥ max{I[ F ]inf ( x ), I[ F ]inf (y)},

and

I[ T ]sup ( x ∗ y) ≤ max{I[ T ]sup ( x ), I[ T ]sup (y)},


I[ I ]sup ( x ∗ y) ≤ max{I[ I ]sup ( x ), I[ I ]sup (y)}, (6)
I[ F ]sup ( x ∗ y) ≤ max{I[ F ]sup ( x ), I[ F ]sup (y)},

for all x, y ∈ X. It follows from Label (6) that

I[ T ]sup ( x ∗ y) ≤ I[ T ]sup ( x ) or I[ T ]sup ( x ∗ y) ≤ I[ T ]sup (y),


I[ I ]sup ( x ∗ y) ≤ I[ I ]sup ( x ) or I[ I ]sup ( x ∗ y) ≤ I[ I ]sup (y),
I[ F ]sup ( x ∗ y) ≤ I[ F ]sup ( x ) or I[ F ]sup ( x ∗ y) ≤ I[ F ]sup (y).

Assume that I[ T ]sup ( x ∗ y) ≤ I[ T ]sup ( x ). Then,

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≤ I[ T ]sup ( x ) − I[ T ]inf ( x ) = I[ T ] ( x ).

If I[ T ]sup ( x ∗ y) ≤ I[ T ]sup (y), then

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≤ I[ T ]sup (y) − I[ T ]inf (y) = I[ T ] (y).

Hence, I[ T ] ( x ∗ y) ≤ max{I[ T ] ( x ), I[ T ] (y)} for all x, y ∈ X. By a similar way, we can


prove that
I[ I ] ( x ∗ y) ≤ max{I[ I ] ( x ), I[ I ] (y)}
and
I[ F ] ( x ∗ y) ≤ max{I[ F ] ( x ), I[ F ] (y)}
for all x, y ∈ X. Therefore, ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 4-fuzzy subalgebra of ( X, ∗, 0).

Theorem 11. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (3, 2), I (3, 2),
F (3, 2))-interval neutrosophic subalgebra of ( X, ∗, 0), then ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 2-fuzzy
subalgebra of ( X, ∗, 0).

Proof. Assume that I := (I[ T ], I[ I ], I[ F ]) is a ( T (3, 2), I (3, 2), F (3, 2))-interval neutrosophic
subalgebra of ( X, ∗, 0). Then, ( X, I[ T ]inf ), ( X, I[ I ]inf ) and ( X, I[ F ]inf ) are 3-fuzzy subalgebra of X, and
( X, I[ T ]sup ), ( X, I[ I ]sup ) and ( X, I[ F ]sup ) are 2-fuzzy subalgebra of X. Hence,

I[ T ]inf ( x ∗ y) ≥ max{I[ T ]inf ( x ), I[ T ]inf (y)},


I[ I ]inf ( x ∗ y) ≥ max{I[ I ]inf ( x ), I[ I ]inf (y)},
I[ F ]inf ( x ∗ y) ≥ max{I[ F ]inf ( x ), I[ F ]inf (y)},


and

I[ T ]sup ( x ∗ y) ≤ min{I[ T ]sup ( x ), I[ T ]sup (y)},


I[ I ]sup ( x ∗ y) ≤ min{I[ I ]sup ( x ), I[ I ]sup (y)},
I[ F ]sup ( x ∗ y) ≤ min{I[ F ]sup ( x ), I[ F ]sup (y)},

for all x, y ∈ X, which imply that

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≤ I[ T ]sup ( x ) − I[ T ]inf ( x ) = I[ T ] ( x ),


I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≤ I[ T ]sup (y) − I[ T ]inf (y) = I[ T ] (y),
I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≤ I[ I ]sup ( x ) − I[ I ]inf ( x ) = I[ I ] ( x ),
I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≤ I[ I ]sup (y) − I[ I ]inf (y) = I[ I ] (y),

and

I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≤ I[ F ]sup ( x ) − I[ F ]inf ( x ) = I[ F ] ( x ),


I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≤ I[ F ]sup (y) − I[ F ]inf (y) = I[ F ] (y).

It follows that
I[ T ] ( x ∗ y) ≤ min{I[ T ] ( x ), I[ T ] (y)},
I[ I ] ( x ∗ y) ≤ min{I[ I ] ( x ), I[ I ] (y)},
and
I[ F ] ( x ∗ y) ≤ min{I[ F ] ( x ), I[ F ] (y)},
for all x, y ∈ X. Hence, ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 2-fuzzy subalgebra of ( X, ∗, 0).

Corollary 6. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (3, 2), I (3, 2),


F (3, 2))-interval neutrosophic subalgebra of ( X, ∗, 0), then ( X, I[ T ] ), ( X, I[ I ] ) and ( X, I[ F ] ) are 4-fuzzy
subalgebra of ( X, ∗, 0).

Theorem 12. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 4),
F (3, 2))-interval neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) is a 3-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) is a 4-fuzzy subalgebra of ( X, ∗, 0).
(3) ( X, I[ F ] ) is a 2-fuzzy subalgebra of ( X, ∗, 0).

Proof. Assume that I := (I[ T ], I[ I ], I[ F ]) is a ( T (4, 3), I (3, 4), F (3, 2))-interval neutrosophic
subalgebra of ( X, ∗, 0). Then, ( X, I[ T ]inf ) is a 4-fuzzy subalgebra of X, ( X, I[ T ]sup ) is a 3-fuzzy
subalgebra of X, ( X, I[ I ]inf ) is a 3-fuzzy subalgebra of X, ( X, I[ I ]sup ) is a 4-fuzzy subalgebra of X,
( X, I[ F ]inf ) is a 3-fuzzy subalgebra of X, and ( X, I[ F ]sup ) is a 2-fuzzy subalgebra of X. Hence,

I[ T ]inf ( x ∗ y) ≤ max{I[ T ]inf ( x ), I[ T ]inf (y)}, (7)

I[ T ]sup ( x ∗ y) ≥ max{I[ T ]sup ( x ), I[ T ]sup (y)}, (8)

I[ I ]inf ( x ∗ y) ≥ max{I[ I ]inf ( x ), I[ I ]inf (y)}, (9)

I[ I ]sup ( x ∗ y) ≤ max{I[ I ]sup ( x ), I[ I ]sup (y)}, (10)

I[ F ]inf ( x ∗ y) ≥ max{I[ F ]inf ( x ), I[ F ]inf (y)}, (11)


and
I[ F ]sup ( x ∗ y) ≤ min{I[ F ]sup ( x ), I[ F ]sup (y)}, (12)

for all x, y ∈ X. Then,

I[ T ]inf ( x ∗ y) ≤ I[ T ]inf ( x ) or I[ T ]inf ( x ∗ y) ≤ I[ T ]inf (y)

by Label (7). It follows from Label (8) that

I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup ( x ) − I[ T ]inf ( x ) = I[ T ] ( x )

or
I[ T ] ( x ∗ y) = I[ T ]sup ( x ∗ y) − I[ T ]inf ( x ∗ y) ≥ I[ T ]sup (y) − I[ T ]inf (y) = I[ T ] (y),
and so that I[ T ] ( x ∗ y) ≥ max{I[ T ] ( x ), I[ T ] (y)} for all x, y ∈ X. Thus, ( X, I[ T ] ) is a 3-fuzzy
subalgebra of ( X, ∗, 0). The condition (10) implies that

I[ I ]sup ( x ∗ y) ≤ I[ I ]sup ( x ) or I[ I ]sup ( x ∗ y) ≤ I[ I ]sup (y). (13)

Combining Labels (9) and (13), we have

I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≤ I[ I ]sup ( x ) − I[ I ]inf ( x ) = I[ I ] ( x )

or
I[ I ] ( x ∗ y) = I[ I ]sup ( x ∗ y) − I[ I ]inf ( x ∗ y) ≤ I[ I ]sup (y) − I[ I ]inf (y) = I[ I ] (y).
It follows that I[ I ] ( x ∗ y) ≤ max{I[ I ] ( x ), I[ I ] (y)} for all x, y ∈ X. Thus, ( X, I[ I ] ) is a 4-fuzzy
subalgebra of ( X, ∗, 0). Using Labels (11) and (12), we have

I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≤ I[ F ]sup ( x ) − I[ F ]inf ( x ) = I[ F ] ( x )

and
I[ F ] ( x ∗ y) = I[ F ]sup ( x ∗ y) − I[ F ]inf ( x ∗ y) ≤ I[ F ]sup (y) − I[ F ]inf (y) = I[ F ] (y),
and so I[ F ] ( x ∗ y) ≤ min{I[ F ] ( x ), I[ F ] (y)} for all x, y ∈ X. Therefore, ( X, I[ F ] ) is a 2-fuzzy
subalgebra of ( X, ∗, 0). Similarly, we can prove the desired results for i = 2.

Corollary 7. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 4), F (3, 2))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) is a 1-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) and ( X, I[ F ] ) are 4-fuzzy subalgebra of ( X, ∗, 0).

By a similar way to the proof of Theorem 12, we have the following theorems.

Theorem 13. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 2),
F (3, 2))-interval neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) is a 3-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) and ( X, I[ F ] ) are 2-fuzzy subalgebra of ( X, ∗, 0).

Corollary 8. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 2), F (3, 2))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) is a 1-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) and ( X, I[ F ] ) are 4-fuzzy subalgebra of ( X, ∗, 0).


Theorem 14. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 2),
F (2, 3))-interval neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) and ( X, I[ F ] ) are 3-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) is a 2-fuzzy subalgebra of ( X, ∗, 0).

Corollary 9. If an interval neutrosophic set I := (I[ T ], I[ I ], I[ F ]) in X is a ( T (i, 3), I (3, 2), F (2, 3))-interval
neutrosophic subalgebra of ( X, ∗, 0) for i ∈ {2, 4}, then

(1) ( X, I[ T ] ) and ( X, I[ F ] ) are 1-fuzzy subalgebra of ( X, ∗, 0).


(2) ( X, I[ I ] ) is a 4-fuzzy subalgebra of ( X, ∗, 0).

Acknowledgments: The authors wish to thank the anonymous reviewers for their valuable suggestions.
Author Contributions: Young Bae Jun conceived and designed the main idea and wrote the paper; Seon Jeong
Kim and Florentin Smarandache analyzed the idea, checked the contents, and found examples. All authors have
read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
2. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
3. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Interval Neutrosophic Sets and Logic: Theory and
Applications in Computing; Neutrosophic Book Series No. 5; Hexis: Phoenix, AZ, USA, 2005.
4. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic
Probability; American Research Press: Rehoboth, NM, USA, 1999.
5. Smarandache, F. Neutrosophic set-a generalization of the intuitionistic fuzzy set. Int. J. Pure Appl. Math.
2005, 24, 287–297.
6. Wang, H.; Zhang, Y.; Sunderraman, R. Truth-value based interval neutrosophic sets. In Proceedings of the
2005 IEEE International Conference on Granular Computing, Beijing, China, 25–27 July 2005; Volume 1,
pp. 274–277. doi:10.1109/GRC.2005.1547284.
7. Imai, Y.; Iséki, K. On axiom systems of propositional calculi. Proc. Jpn. Acad. 1966, 42, 19–21.
8. Iséki, K. An algebra related with a propositional calculus. Proc. Jpn. Acad. 1966, 42, 26–29.
9. Huang, Y.S. BCI-Algebra; Science Press: Beijing, China, 2006.
10. Meng, J.; Jun, Y.B. BCK-Algebra; Kyungmoon Sa Co.: Seoul, Korea, 1994.
11. Jun, Y.B.; Hur, K.; Lee, K.J. Hyperfuzzy subalgebra of BCK/BCI-algebra. Ann. Fuzzy Math. Inf. 2018,
15, 17–28.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Cross Entropy Measures of Bipolar and Interval
Bipolar Neutrosophic Sets and Their Application for
Multi-Attribute Decision-Making
Surapati Pramanik 1,*, Partha Pratim Dey 2, Florentin Smarandache 3 and Jun Ye 4
1 Department of Mathematics, Nandalal Ghosh B.T. College, Panpur, P.O.-Narayanpur,
District–North 24 Parganas, West Bengal 743126, India
2 Department of Mathematics, Patipukur Pallisree Vidyapith, Patipukur, Kolkata 700048, India;
[email protected]
3 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
4 Department of Electrical and Information Engineering, Shaoxing University, 508 Huancheng West Road,
Shaoxing 312000, China; [email protected]
* Correspondence: [email protected]; Tel.: +91-947-703-5544

Received: 7 January 2018; Accepted: 22 March 2018; Published: 24 March 2018

Abstract: The bipolar neutrosophic set is an important extension of the bipolar fuzzy set. The bipolar
neutrosophic set is a hybridization of the bipolar fuzzy set and neutrosophic set. Every element of
a bipolar neutrosophic set consists of three independent positive membership functions and three
independent negative membership functions. In this paper, we develop cross entropy measures of
bipolar neutrosophic sets and prove their basic properties. We also define cross entropy measures of
interval bipolar neutrosophic sets and prove their basic properties. Thereafter, we develop two novel
multi-attribute decision-making strategies based on the proposed cross entropy measures. In the
decision-making framework, we calculate the weighted cross entropy measures between each
alternative and the ideal alternative to rank the alternatives and choose the best one. We solve
two illustrative examples of multi-attribute decision-making problems and compare the obtained
result with the results of other existing strategies to show the applicability and effectiveness of the
developed strategies. At the end, the main conclusion and future scope of research are summarized.

Keywords: neutrosophic set; bipolar neutrosophic set; interval bipolar neutrosophic set; multi-attribute
decision-making; cross entropy measure

1. Introduction
Shannon and Weaver [1] and Shannon [2] proposed the entropy measure which dealt formally
with communication systems at its inception. According to Shannon and Weaver [1] and Shannon [2],
the entropy measure is an important decision-making apparatus for computing uncertain information.
Shannon [2] introduced the concept of the cross entropy strategy in information theory.
The measure of a quantity of fuzzy information obtained from a fuzzy set or fuzzy system is
termed fuzzy entropy. However, the meaning of fuzzy entropy is quite different from the classical
Shannon entropy because it is defined based on a nonprobabilistic concept [3–5], while Shannon
entropy is defined based on a randomness (probabilistic) concept. In 1968, Zadeh [6] extended
the Shannon entropy to fuzzy entropy on a fuzzy subset with respect to the concerned probability
distribution. In 1972, De Luca and Termini [7] proposed fuzzy entropy based on Shannon’s function
and introduced the axioms with which the fuzzy entropy should comply. Sander [8] presented Shannon
fuzzy entropy and proved that the properties sharpness, valuation, and general additivity have to
be imposed on fuzzy entropy. Xie and Bedrosian [9] proposed another form of total fuzzy entropy.

Axioms 2018, 7, 21; doi:10.3390/axioms7020021



To overcome the drawbacks of total entropy [8,9], Pal and Pal [10] introduced hybrid entropy that can
be used as an objective measure for a proper defuzzification of a certain fuzzy set. Hybrid entropy [10]
considers both probabilistic entropies in the absence of fuzziness. In the same study, Pal and Pal [10]
defined higher-order entropy. Kaufmann and Gupta [11] studied the degree of fuzziness of a fuzzy set
by a metric distance between its membership function and the membership function (characteristic
function) of its nearest crisp set. Yager [12,13] introduced a fuzzy entropy card as a fuzziness measure
by observing that the intersection of a fuzzy set and its complement is not the void set. Kosko [14,15]
studied the fuzzy entropy of a fuzzy set based on the fuzzy set geometry and distances between them.
Parkash et al. [16] proposed two new measures of weighted fuzzy entropy.
Burillo and Bustince [17] presented an axiomatic definition of an intuitionistic fuzzy entropy
measure. Szmidt and Kacprzyk [18] developed a new entropy measure based on a geometric
interpretation of the intuitionistic fuzzy set (IFS). Wei et al. [19] proposed an entropy measure for
interval-valued intuitionistic fuzzy sets (IVIFSs) and employed it in pattern recognition and multi
criteria decision-making (MCDM). Li [20] presented a new multi-attribute decision-making (MADM)
strategy combining entropy and technique for order of preference by similarity to ideal solution
(TOPSIS) in the IVIFS environment.
Shang and Jiang [21] developed cross entropy in the fuzzy environment. Vlachos and Sergiadis [22]
presented intuitionistic fuzzy cross entropy by extending fuzzy cross entropy [21]. Ye [23] proposed
a new cross entropy in the IVIFS environment and developed an optimal decision-making strategy.
Xia and Xu [24] defined a new entropy and a cross entropy and presented multi-attribute group
decision-making (MAGDM) strategy in the IFS environment. Tong and Yu [25] defined cross entropy
in the IVIFS environment and employed it to solve MADM problems.
Smarandache [26] introduced the neutrosophic set, which is a generalization of the fuzzy set [27] and
intuitionistic fuzzy set [28]. The single-valued neutrosophic set (SVNS) [29], an instance of the neutrosophic
set, has caught the attention of researchers due to its applicability in decision-making [30–61], conflict
resolution [62], educational problems [63,64], image processing [65–67], cluster analysis [68,69], social
problems [70,71], etc.
Majumdar and Samanta [72] proposed an entropy measure and presented an MCDM strategy
in the SVNS environment. Ye [73] defined cross entropy for SVNS and proposed an MCDM strategy
which bears undefined phenomena. To overcome the undefined phenomena, Ye [74] defined improved
cross entropy measures for SVNSs and interval neutrosophic sets (INSs) [75], which are straightforward
symmetric, and employed them to solve MADM problems. Since MADM strategies [73,74] are suitable for
single-decision-maker-oriented problems, Pramanik et al. [76] defined NS-cross entropy and developed
an MAGDM strategy which is straightforward symmetric and free from undefined phenomena and
suitable for group decision making problem. Şahin [77] proposed two techniques to convert the
interval neutrosophic information to single-valued neutrosophic information and fuzzy information.
In the same study, Şahin [77] defined an interval neutrosophic cross entropy measure by utilizing
two reduction methods and an MCDM strategy. Tian et al. [78] developed a transformation operator
to convert interval neutrosophic numbers to single-valued neutrosophic numbers and defined cross
entropy measures for two SVNSs. In the same study, Tian et al. [78] developed an MCDM strategy
based on cross entropy and TOPSIS [79] where the weight of the criterion is incomplete. Tian et al. [78]
defined a cross entropy for INSs and developed an MCDM strategy based on the cross entropy and
TOPSIS. The MCDM strategies proposed by Sahin [77] and Tian et al. [78] are applicable for a single
decision maker only. Therefore, multiple decision-makers cannot participate in the strategies in [77,78].
To tackle the problem, Dalapati et al. [80] proposed IN-cross entropy and weighted IN-cross entropy
and developed an MAGDM strategy.
Deli et al. [81] proposed bipolar neutrosophic set (BNS) by hybridizing the concept of bipolar fuzzy
sets [82,83] and neutrosophic sets [26]. A BNS has two fully independent parts, which are positive
membership degree T+ → [0, 1], I+ → [0, 1], F+ → [0, 1], and negative membership degree T− → [−1, 0],
I− → [−1, 0], F− → [−1, 0], where the positive membership degrees T+, I+, F+ represent truth


membership degree, indeterminacy membership degree, and false membership degree, respectively,
of an element and the negative membership degrees T−, I−, F− represent truth membership degree,
indeterminacy membership degree, and false membership degree, respectively, of an element to some
implicit counter property corresponding to a BNS. Deli et al. [81] defined some operations, namely,
score, accuracy, and certainty functions, to compare BNSs and provided some operators in order to
aggregate BNSs. Deli and Subas [84] defined a correlation coefficient similarity measure for dealing
with MCDM problems in a single-valued bipolar neutrosophic setting. Şahin et al. [85] proposed
a Jaccard vector similarity measure for MCDM problems with single-valued neutrosophic information.
Uluçay et al. [86] introduced a Dice similarity measure, weighted Dice similarity measure, hybrid vector
similarity measure, and weighted hybrid vector similarity measure for BNSs and established an MCDM
strategy. Dey et al. [87] investigated a TOPSIS strategy for solving multi-attribute decision-making
(MADM) problems with bipolar neutrosophic information where the weights of the attributes are
completely unknown to the decision-maker. Pramanik et al. [88] defined projection, bidirectional
projection, and hybrid projection measures for BNSs and proved their basic properties. In the same
study, Pramanik et al. [88] developed three new MADM strategies based on the proposed projection,
bidirectional projection, and hybrid projection measures with bipolar neutrosophic information.
Wang et al. [89] defined Frank operations of bipolar neutrosophic numbers (BNNs) and proposed
Frank bipolar neutrosophic Choquet Bonferroni mean operators by combining Choquet integral
operators and Bonferroni mean operators based on Frank operations of BNNs. In the same study,
Wang et al. [89] established an MCDM strategy based on Frank Choquet Bonferroni operators of BNNs
in a bipolar neutrosophic environment. Pramanik et al. [90] developed a Tomada de decisão interativa
e multicritério (TODIM) strategy for MAGDM in a bipolar neutrosophic environment. An MADM
strategy based on cross entropy for BNSs is yet to appear in the literature.
Mahmood et al. [91] and Deli et al. [92] introduced the hybridized structure called interval bipolar
neutrosophic sets (IBNSs) by combining BNSs and INSs and defined some operations and operators
for IBNSs. An MADM strategy based on cross entropy for IBNSs is yet to appear in the literature.
Research gap:
An MADM strategy based on cross entropy for BNSs and an MADM strategy based on cross
entropy for IBNSs.
This paper answers the following research questions:

i. Is it possible to define a new cross entropy measure for BNSs?


ii. Is it possible to define a new weighted cross entropy measure for BNSs?
iii. Is it possible to develop a new MADM strategy based on the proposed cross entropy measure
in a BNS environment?
iv. Is it possible to develop a new MADM strategy based on the proposed weighted cross entropy
measure in a BNS environment?
v. Is it possible to define a new cross entropy measure for IBNSs?
vi. Is it possible to define a new weighted cross entropy measure for IBNSs?
vii. Is it possible to develop a new MADM strategy based on the proposed cross entropy measure
in an IBNS environment?
viii. Is it possible to develop a new MADM strategy based on the proposed weighted cross entropy
measure in an IBNS environment?

Motivation:
The above-mentioned analysis presents the motivation behind proposing a cross-entropy-based
strategy for tackling MADM in BNS and IBNS environments. This study develops two novel
cross-entropy-based MADM strategies.
The objectives of the paper are:


1. To define a new cross entropy measure and prove its basic properties.
2. To define a new weighted cross entropy measure and prove its basic properties.
3. To develop a new MADM strategy based on the weighted cross entropy measure in a BNS
environment.
4. To develop a new MADM strategy based on the weighted cross entropy measure in an IBNS
environment.

To fill the research gap, we propose a cross-entropy-based MADM strategy in the BNS
environment and the IBNS environment.
The main contributions of this paper are summarized below:

1. We propose a new cross entropy measure in the BNS environment and prove its basic properties.
2. We propose a new weighted cross entropy measure in the IBNS environment and prove its
basic properties.
3. We develop a new MADM strategy based on weighted cross entropy to solve MADM problems
in a BNS environment.
4. We develop a new MADM strategy based on weighted cross entropy to solve MADM problems
in an IBNS environment.
5. Two illustrative numerical examples are solved and a comparison analysis is provided.

The rest of the paper is organized as follows. In Section 2, we present some concepts regarding
SVNSs, INSs, BNSs, and IBNSs. Section 3 proposes cross entropy and weighted cross entropy measures
for BNSs and investigates their properties. In Section 4, we extend the cross entropy measures for BNSs
to cross entropy measures for IBNSs and discuss their basic properties. Two novel MADM strategies
based on the proposed cross entropy measures in bipolar and interval bipolar neutrosophic settings are
presented in Section 5. In Section 6, two numerical examples are solved and a comparison with other
existing methods is provided. In Section 7, conclusions and the scope of future work are provided.

2. Preliminary
In this section, we provide some basic definitions regarding SVNSs, INSs, BNSs, and IBNSs.

2.1. Single-Valued Neutrosophic Sets


An SVNS [29] S in U is characterized by a truth membership function T_S(x), an indeterminate
membership function I_S(x), and a falsity membership function F_S(x). An SVNS S over U is defined by

S = {⟨x, T_S(x), I_S(x), F_S(x)⟩ | x ∈ U}

where T_S(x), I_S(x), F_S(x): U → [0, 1] and 0 ≤ T_S(x) + I_S(x) + F_S(x) ≤ 3 for each point x ∈ U.

2.2. Interval Neutrosophic Set


An interval neutrosophic set [75] P in U is expressed as given below:

P = {⟨x, T_P(x), I_P(x), F_P(x)⟩ | x ∈ U}
  = {⟨x, [inf T_P(x), sup T_P(x)]; [inf I_P(x), sup I_P(x)]; [inf F_P(x), sup F_P(x)]⟩ | x ∈ U}

where T_P(x), I_P(x), F_P(x) are the truth membership function, indeterminacy membership function, and
falsity membership function, respectively. For each point x in U, T_P(x), I_P(x), F_P(x) ⊆ [0, 1] satisfying
the condition 0 ≤ sup T_P(x) + sup I_P(x) + sup F_P(x) ≤ 3.


2.3. Bipolar Neutrosophic Set


A BNS [81] E in U is presented as given below:

E = {⟨x, T_E^+(x), I_E^+(x), F_E^+(x), T_E^−(x), I_E^−(x), F_E^−(x)⟩ | x ∈ U}

where T_E^+(x), I_E^+(x), F_E^+(x): U → [0, 1] and T_E^−(x), I_E^−(x), F_E^−(x): U → [−1, 0]. Here, T_E^+(x), I_E^+(x),
F_E^+(x) denote the truth membership, indeterminate membership, and falsity membership functions
corresponding to the BNS E on an element x ∈ U, and T_E^−(x), I_E^−(x), F_E^−(x) denote the truth membership,
indeterminate membership, and falsity membership of an element x ∈ U to some implicit counter
property corresponding to E.
Definition 1. Ref. [81]: Let E_1 = {⟨x, T_{E_1}^+(x), I_{E_1}^+(x), F_{E_1}^+(x), T_{E_1}^−(x), I_{E_1}^−(x), F_{E_1}^−(x)⟩ | x ∈ U} and
E_2 = {⟨x, T_{E_2}^+(x), I_{E_2}^+(x), F_{E_2}^+(x), T_{E_2}^−(x), I_{E_2}^−(x), F_{E_2}^−(x)⟩ | x ∈ U} be any two BNSs. Then

‚ E_1 ⊆ E_2 if, and only if,

T_{E_1}^+(x) ≤ T_{E_2}^+(x), I_{E_1}^+(x) ≤ I_{E_2}^+(x), F_{E_1}^+(x) ≥ F_{E_2}^+(x); T_{E_1}^−(x) ≥ T_{E_2}^−(x), I_{E_1}^−(x) ≥ I_{E_2}^−(x), F_{E_1}^−(x) ≤ F_{E_2}^−(x)
for all x ∈ U.

‚ E_1 = E_2 if, and only if,

T_{E_1}^+(x) = T_{E_2}^+(x), I_{E_1}^+(x) = I_{E_2}^+(x), F_{E_1}^+(x) = F_{E_2}^+(x); T_{E_1}^−(x) = T_{E_2}^−(x), I_{E_1}^−(x) = I_{E_2}^−(x), F_{E_1}^−(x) = F_{E_2}^−(x)
for all x ∈ U.

The complement of E is E^c = {⟨x, T_{E^c}^+(x), I_{E^c}^+(x), F_{E^c}^+(x), T_{E^c}^−(x), I_{E^c}^−(x), F_{E^c}^−(x)⟩ | x ∈ U}
where
T_{E^c}^+(x) = F_E^+(x), I_{E^c}^+(x) = 1 − I_E^+(x), F_{E^c}^+(x) = T_E^+(x);
T_{E^c}^−(x) = F_E^−(x), I_{E^c}^−(x) = −1 − I_E^−(x), F_{E^c}^−(x) = T_E^−(x).

‚ The union E_1 ∪ E_2 is defined as follows:

E_1 ∪ E_2 = {Max(T_{E_1}^+(x), T_{E_2}^+(x)), Min(I_{E_1}^+(x), I_{E_2}^+(x)), Min(F_{E_1}^+(x), F_{E_2}^+(x)), Min(T_{E_1}^−(x), T_{E_2}^−(x)),
Max(I_{E_1}^−(x), I_{E_2}^−(x)), Max(F_{E_1}^−(x), F_{E_2}^−(x))}, ∀ x ∈ U.

‚ The intersection E_1 ∩ E_2 [88] is defined as follows:

E_1 ∩ E_2 = {Min(T_{E_1}^+(x), T_{E_2}^+(x)), Max(I_{E_1}^+(x), I_{E_2}^+(x)), Max(F_{E_1}^+(x), F_{E_2}^+(x)), Max(T_{E_1}^−(x), T_{E_2}^−(x)),
Min(I_{E_1}^−(x), I_{E_2}^−(x)), Min(F_{E_1}^−(x), F_{E_2}^−(x))}, ∀ x ∈ U.
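
As a quick illustration, the following Python fragment is an editorial sketch (not code from the paper): it encodes a bipolar neutrosophic value pointwise as a 6-tuple (T+, I+, F+, T-, I-, F-) and implements the complement, union, and intersection exactly as stated in Definition 1; the function names are ours.

```python
# Pointwise BNS operations of Definition 1 on 6-tuples (T+, I+, F+, T-, I-, F-).

def bns_complement(e):
    tp, ip, fp, tn, in_, fn = e
    return (fp, 1 - ip, tp, fn, -1 - in_, tn)

def bns_union(e1, e2):
    return (max(e1[0], e2[0]), min(e1[1], e2[1]), min(e1[2], e2[2]),
            min(e1[3], e2[3]), max(e1[4], e2[4]), max(e1[5], e2[5]))

def bns_intersection(e1, e2):
    return (min(e1[0], e2[0]), max(e1[1], e2[1]), max(e1[2], e2[2]),
            max(e1[3], e2[3]), min(e1[4], e2[4]), min(e1[5], e2[5]))

m = (0.7, 0.3, 0.4, -0.3, -0.5, -0.1)   # an illustrative bipolar neutrosophic number
print(bns_complement(m))                 # (0.4, 0.7, 0.7, -0.1, -0.5, -0.3)
```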

2.4. Interval Bipolar Neutrosophic Sets


An IBNS [91,92] R = {⟨x, [inf T_R^+(x), sup T_R^+(x)]; [inf I_R^+(x), sup I_R^+(x)]; [inf F_R^+(x), sup F_R^+(x)];
[inf T_R^−(x), sup T_R^−(x)]; [inf I_R^−(x), sup I_R^−(x)]; [inf F_R^−(x), sup F_R^−(x)]⟩ | x ∈ U} is characterized by
positive and negative truth membership functions T_R^+(x), T_R^−(x), respectively; positive and negative
indeterminacy membership functions I_R^+(x), I_R^−(x), respectively; and positive and negative falsity
membership functions F_R^+(x), F_R^−(x), respectively. Here, for any x ∈ U, T_R^+(x), I_R^+(x), F_R^+(x) ⊆ [0, 1]
and T_R^−(x), I_R^−(x), F_R^−(x) ⊆ [−1, 0] with the conditions 0 ≤ sup T_R^+(x) + sup I_R^+(x) + sup F_R^+(x) ≤ 3 and
−3 ≤ sup T_R^−(x) + sup I_R^−(x) + sup F_R^−(x) ≤ 0.

Definition 2. Ref. [91,92]: Let R = {⟨x, [inf T_R^+(x), sup T_R^+(x)]; [inf I_R^+(x), sup I_R^+(x)]; [inf F_R^+(x), sup F_R^+(x)];
[inf T_R^−(x), sup T_R^−(x)]; [inf I_R^−(x), sup I_R^−(x)]; [inf F_R^−(x), sup F_R^−(x)]⟩ | x ∈ U} and S = {⟨x, [inf T_S^+(x),
sup T_S^+(x)]; [inf I_S^+(x), sup I_S^+(x)]; [inf F_S^+(x), sup F_S^+(x)]; [inf T_S^−(x), sup T_S^−(x)]; [inf I_S^−(x), sup I_S^−(x)];
[inf F_S^−(x), sup F_S^−(x)]⟩ | x ∈ U} be two IBNSs in U. Then

‚ R ⊆ S if, and only if,

inf T_R^+(x) ≤ inf T_S^+(x), sup T_R^+(x) ≤ sup T_S^+(x),
inf I_R^+(x) ≥ inf I_S^+(x), sup I_R^+(x) ≥ sup I_S^+(x),
inf F_R^+(x) ≥ inf F_S^+(x), sup F_R^+(x) ≥ sup F_S^+(x),
inf T_R^−(x) ≥ inf T_S^−(x), sup T_R^−(x) ≥ sup T_S^−(x),
inf I_R^−(x) ≤ inf I_S^−(x), sup I_R^−(x) ≤ sup I_S^−(x),
inf F_R^−(x) ≤ inf F_S^−(x), sup F_R^−(x) ≤ sup F_S^−(x),
for all x ∈ U.
‚ R = S if, and only if,

inf T_R^+(x) = inf T_S^+(x), sup T_R^+(x) = sup T_S^+(x), inf I_R^+(x) = inf I_S^+(x), sup I_R^+(x) = sup I_S^+(x),
inf F_R^+(x) = inf F_S^+(x), sup F_R^+(x) = sup F_S^+(x), inf T_R^−(x) = inf T_S^−(x), sup T_R^−(x) = sup T_S^−(x),
inf I_R^−(x) = inf I_S^−(x), sup I_R^−(x) = sup I_S^−(x), inf F_R^−(x) = inf F_S^−(x), sup F_R^−(x) = sup F_S^−(x),
for all x ∈ U.
‚ The complement of R is defined as R^C = {⟨x, [inf T_{R^C}^+(x), sup T_{R^C}^+(x)]; [inf I_{R^C}^+(x), sup I_{R^C}^+(x)];
[inf F_{R^C}^+(x), sup F_{R^C}^+(x)]; [inf T_{R^C}^−(x), sup T_{R^C}^−(x)]; [inf I_{R^C}^−(x), sup I_{R^C}^−(x)]; [inf F_{R^C}^−(x), sup F_{R^C}^−(x)]⟩ | x ∈ U} where

inf T_{R^C}^+(x) = inf F_R^+(x), sup T_{R^C}^+(x) = sup F_R^+(x),
inf I_{R^C}^+(x) = 1 − sup I_R^+(x), sup I_{R^C}^+(x) = 1 − inf I_R^+(x),
inf F_{R^C}^+(x) = inf T_R^+(x), sup F_{R^C}^+(x) = sup T_R^+(x),
inf T_{R^C}^−(x) = inf F_R^−(x), sup T_{R^C}^−(x) = sup F_R^−(x),
inf I_{R^C}^−(x) = −1 − sup I_R^−(x), sup I_{R^C}^−(x) = −1 − inf I_R^−(x),
inf F_{R^C}^−(x) = inf T_R^−(x), sup F_{R^C}^−(x) = sup T_R^−(x),

for all x ∈ U.
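
The complement rules above are easy to misread because the indeterminacy intervals are reflected. The short Python sketch below is an editorial illustration (not from the paper) that applies them pointwise to one interval bipolar neutrosophic value; the function name is ours.

```python
# IBNS complement of Definition 2, with a value stored as six [inf, sup] pairs
# in the order (T+, I+, F+, T-, I-, F-).

def ibns_complement(r):
    tp, ip, fp, tn, in_, fn = r
    return (
        fp[:],                        # T+ of R^C = F+ of R
        [1 - ip[1], 1 - ip[0]],       # I+ of R^C = [1 - sup I+, 1 - inf I+]
        tp[:],                        # F+ of R^C = T+ of R
        fn[:],                        # T- of R^C = F- of R
        [-1 - in_[1], -1 - in_[0]],   # I- of R^C = [-1 - sup I-, -1 - inf I-]
        tn[:],                        # F- of R^C = T- of R
    )

r = ([0.5, 0.8], [0.4, 0.6], [0.2, 0.6], [-0.3, -0.1], [-0.5, -0.1], [-0.5, -0.2])
print(ibns_complement(r))
# ([0.2, 0.6], [0.4, 0.6], [0.5, 0.8], [-0.5, -0.2], [-0.9, -0.5], [-0.3, -0.1])
```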

3. Cross Entropy Measures of Bipolar Neutrosophic Sets


In this section we define a cross entropy measure between two BNSs and establish some of its
basic properties.

Definition 3. For any two BNSs M and N in U, the cross entropy measure can be defined as follows.

$$
\begin{aligned}
C_B(M,N) = \sum_{i=1}^{n}\Bigg[\;
& \sqrt{\tfrac{T_M^{+}(x_i)+T_N^{+}(x_i)}{2}} - \tfrac{\sqrt{T_M^{+}(x_i)}+\sqrt{T_N^{+}(x_i)}}{2}
 + \sqrt{\tfrac{I_M^{+}(x_i)+I_N^{+}(x_i)}{2}} - \tfrac{\sqrt{I_M^{+}(x_i)}+\sqrt{I_N^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{\left(1-I_M^{+}(x_i)\right)+\left(1-I_N^{+}(x_i)\right)}{2}} - \tfrac{\sqrt{1-I_M^{+}(x_i)}+\sqrt{1-I_N^{+}(x_i)}}{2}
 + \sqrt{\tfrac{F_M^{+}(x_i)+F_N^{+}(x_i)}{2}} - \tfrac{\sqrt{F_M^{+}(x_i)}+\sqrt{F_N^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{-\left(T_M^{-}(x_i)+T_N^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-T_M^{-}(x_i)}+\sqrt{-T_N^{-}(x_i)}}{2}
 + \sqrt{\tfrac{-\left(I_M^{-}(x_i)+I_N^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-I_M^{-}(x_i)}+\sqrt{-I_N^{-}(x_i)}}{2} \\
& + \sqrt{\tfrac{\left(1+I_M^{-}(x_i)\right)+\left(1+I_N^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{1+I_M^{-}(x_i)}+\sqrt{1+I_N^{-}(x_i)}}{2}
 + \sqrt{\tfrac{-\left(F_M^{-}(x_i)+F_N^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-F_M^{-}(x_i)}+\sqrt{-F_N^{-}(x_i)}}{2}
\;\Bigg] \qquad (1)
\end{aligned}
$$

Theorem 1. If M = <T_M^+(x_i), I_M^+(x_i), F_M^+(x_i), T_M^−(x_i), I_M^−(x_i), F_M^−(x_i)> and N = <T_N^+(x_i), I_N^+(x_i), F_N^+(x_i),
T_N^−(x_i), I_N^−(x_i), F_N^−(x_i)> are two BNSs in U, then the cross entropy measure C_B(M, N) satisfies the
following properties:

(1) C_B(M, N) ≥ 0;
(2) C_B(M, N) = 0 if, and only if, T_M^+(x_i) = T_N^+(x_i), I_M^+(x_i) = I_N^+(x_i), F_M^+(x_i) = F_N^+(x_i), T_M^−(x_i) = T_N^−(x_i),
I_M^−(x_i) = I_N^−(x_i), F_M^−(x_i) = F_N^−(x_i), ∀ x ∈ U;
(3) C_B(M, N) = C_B(N, M);
(4) C_B(M, N) = C_B(M^C, N^C).

Proof.
(1) For all positive numbers a and b we have the inequality √((a + b)/2) ≥ (√a + √b)/2, which follows from the concavity of the square root. Applying this inequality to each of the eight summands in Equation (1), we can easily obtain C_B(M, N) ≥ 0.
(2) The inequality √((a + b)/2) ≥ (√a + √b)/2 becomes the equality √((a + b)/2) = (√a + √b)/2 if, and only if, a = b, and therefore C_B(M, N) = 0 if, and only if, M = N, i.e., T_M^+(x_i) = T_N^+(x_i), I_M^+(x_i) = I_N^+(x_i), F_M^+(x_i) = F_N^+(x_i), T_M^−(x_i) = T_N^−(x_i), I_M^−(x_i) = I_N^−(x_i), F_M^−(x_i) = F_N^−(x_i) ∀ x ∈ U.
(3) Every summand in Equation (1) has the form √((a_M + a_N)/2) − (√a_M + √a_N)/2, where the quantity a is evaluated in the same way on M and on N; interchanging M and N leaves each summand unchanged. Expanding C_B(M, N) and C_B(N, M) term by term therefore yields the same expression, so C_B(M, N) = C_B(N, M).
(4) By the definition of the complement, T_{M^C}^+ = F_M^+, I_{M^C}^+ = 1 − I_M^+, F_{M^C}^+ = T_M^+, T_{M^C}^− = F_M^−, I_{M^C}^− = −1 − I_M^−, F_{M^C}^− = T_M^−, and analogously for N^C. Substituting these expressions into Equation (1), the T^+ and F^+ summands of C_B(M^C, N^C) coincide with the F^+ and T^+ summands of C_B(M, N); the I^+ summand and the (1 − I^+) summand exchange roles, since 1 − I_{M^C}^+ = I_M^+; the (−T^−) and (−F^−) summands exchange roles; and the (−I^−) summand and the (1 + I^−) summand exchange roles, since −I_{M^C}^− = 1 + I_M^− and 1 + I_{M^C}^− = −I_M^−. The eight summands are therefore merely permuted and the total is unchanged, so C_B(M^C, N^C) = C_B(M, N).
The proof is completed. 

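For completeness, the elementary inequality invoked in parts (1) and (2) of the proof of Theorem 1 can be verified in one line; this derivation is an editorial addition and is not part of the original paper:

$$
\sqrt{\tfrac{a+b}{2}} - \frac{\sqrt{a}+\sqrt{b}}{2}
= \frac{2(a+b) - \left(\sqrt{a}+\sqrt{b}\right)^{2}}{2\left(2\sqrt{\tfrac{a+b}{2}} + \sqrt{a}+\sqrt{b}\right)}
= \frac{\left(\sqrt{a}-\sqrt{b}\right)^{2}}{2\left(2\sqrt{\tfrac{a+b}{2}} + \sqrt{a}+\sqrt{b}\right)} \ge 0,
$$

with equality precisely when a = b.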

Example 1. Suppose that M = <0.7, 0.3, 0.4, −0.3, −0.5, −0.1> and N = <0.5, 0.2, 0.5, −0.3, −0.3, −0.2> are
two BNSs; then the cross entropy between M and N is calculated as follows:

$$
\begin{aligned}
C_B(M,N) = {} & \sqrt{\tfrac{0.7+0.5}{2}} - \tfrac{\sqrt{0.7}+\sqrt{0.5}}{2}
+ \sqrt{\tfrac{0.3+0.2}{2}} - \tfrac{\sqrt{0.3}+\sqrt{0.2}}{2}
+ \sqrt{\tfrac{(1-0.3)+(1-0.2)}{2}} - \tfrac{\sqrt{1-0.3}+\sqrt{1-0.2}}{2}
+ \sqrt{\tfrac{0.4+0.5}{2}} - \tfrac{\sqrt{0.4}+\sqrt{0.5}}{2} \\
& + \sqrt{\tfrac{-(-0.3-0.3)}{2}} - \tfrac{\sqrt{-(-0.3)}+\sqrt{-(-0.3)}}{2}
+ \sqrt{\tfrac{-(-0.5-0.3)}{2}} - \tfrac{\sqrt{-(-0.5)}+\sqrt{-(-0.3)}}{2} \\
& + \sqrt{\tfrac{(1-0.5)+(1-0.3)}{2}} - \tfrac{\sqrt{1-0.5}+\sqrt{1-0.3}}{2}
+ \sqrt{\tfrac{-(-0.1-0.2)}{2}} - \tfrac{\sqrt{-(-0.1)}+\sqrt{-(-0.2)}}{2}
= 0.01738474.
\end{aligned}
$$

Definition 4. Suppose that w_i is the weight of each element x_i, i = 1, 2, ..., n, where w_i ∈ [0, 1] and Σ_{i=1}^{n} w_i = 1;
then the weighted cross entropy measure between any two BNSs M and N in U can be defined as follows.

$$
C_B(M,N)_w = \sum_{i=1}^{n} w_i \,\big[\,\cdots\,\big] \qquad (2)
$$

where the bracketed expression [⋯] is exactly the eight-term sum that appears inside the summation of Equation (1).

Theorem 2. If M = <T_M^+(x_i), I_M^+(x_i), F_M^+(x_i), T_M^−(x_i), I_M^−(x_i), F_M^−(x_i)> and N = <T_N^+(x_i), I_N^+(x_i), F_N^+(x_i),
T_N^−(x_i), I_N^−(x_i), F_N^−(x_i)> are two BNSs in U, then the weighted cross entropy measure C_B(M, N)_w satisfies the
following properties:

(1) C_B(M, N)_w ≥ 0;
(2) C_B(M, N)_w = 0 if, and only if, T_M^+(x_i) = T_N^+(x_i), I_M^+(x_i) = I_N^+(x_i), F_M^+(x_i) = F_N^+(x_i), T_M^−(x_i) = T_N^−(x_i),
I_M^−(x_i) = I_N^−(x_i), F_M^−(x_i) = F_N^−(x_i), ∀ x ∈ U;
(3) C_B(M, N)_w = C_B(N, M)_w;
(4) C_B(M^C, N^C)_w = C_B(M, N)_w.

Proof is given in Appendix A.

Example 2. Suppose that M = <0.7, 0.3, 0.4, −0.3, −0.5, −0.1> and N = <0.5, 0.2, 0.5, −0.3, −0.3, −0.2> are
two BNSs and w = 0.4; then the weighted cross entropy between M and N is calculated as given below.

$$
C_B(M,N)_w = 0.4 \times \big[\,\cdots\,\big] = 0.006953896,
$$

where the bracketed expression [⋯] is the same eight-term sum that was evaluated in Example 1.
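
Both measures are easy to prototype. The following Python sketch is an editorial illustration, not code from the paper: it transcribes the eight-term summand of Equation (1) as reconstructed above, and the optional weights argument gives the weighted measure of Definition 4. All names are ours, and the numerical output depends on that reconstruction.

```python
from math import sqrt

def _d(a, b):
    """sqrt((a+b)/2) - (sqrt(a)+sqrt(b))/2, the building block of Equation (1)."""
    return sqrt((a + b) / 2) - (sqrt(a) + sqrt(b)) / 2

def cross_entropy_bns(M, N, weights=None):
    """M, N: lists of 6-tuples (T+, I+, F+, T-, I-, F-), one per element of U."""
    total = 0.0
    for k, (m, n) in enumerate(zip(M, N)):
        tpm, ipm, fpm, tnm, inm, fnm = m
        tpn, ipn, fpn, tnn, inn, fnn = n
        term = (_d(tpm, tpn) + _d(ipm, ipn) + _d(1 - ipm, 1 - ipn) + _d(fpm, fpn)
                + _d(-tnm, -tnn) + _d(-inm, -inn) + _d(1 + inm, 1 + inn) + _d(-fnm, -fnn))
        total += term if weights is None else weights[k] * term
    return total

M = [(0.7, 0.3, 0.4, -0.3, -0.5, -0.1)]
N = [(0.5, 0.2, 0.5, -0.3, -0.3, -0.2)]
print(cross_entropy_bns(M, N))           # C_B(M, N)
print(cross_entropy_bns(M, N, [0.4]))    # weighted variant of Definition 4
```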

4. Cross Entropy Measure of IBNSs


This section extends the concepts of cross entropy and weighted cross entropy measures of BNSs
to IBNSs.

Definition 5. The cross entropy measure between any two IBNSs R = <[inf T_R^+(x_i), sup T_R^+(x_i)],
[inf I_R^+(x_i), sup I_R^+(x_i)], [inf F_R^+(x_i), sup F_R^+(x_i)], [inf T_R^−(x_i), sup T_R^−(x_i)], [inf I_R^−(x_i), sup I_R^−(x_i)], [inf F_R^−(x_i),
sup F_R^−(x_i)]> and S = <[inf T_S^+(x_i), sup T_S^+(x_i)], [inf I_S^+(x_i), sup I_S^+(x_i)], [inf F_S^+(x_i), sup F_S^+(x_i)], [inf T_S^−(x_i),
sup T_S^−(x_i)], [inf I_S^−(x_i), sup I_S^−(x_i)], [inf F_S^−(x_i), sup F_S^−(x_i)]> in U can be defined as follows.

$$
\begin{aligned}
C_{IB}(R,S) = \frac{1}{2}\sum_{i=1}^{n}\Bigg[\;
& \sqrt{\tfrac{\inf T_R^{+}(x_i)+\inf T_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\inf T_R^{+}(x_i)}+\sqrt{\inf T_S^{+}(x_i)}}{2}
 + \sqrt{\tfrac{\sup T_R^{+}(x_i)+\sup T_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\sup T_R^{+}(x_i)}+\sqrt{\sup T_S^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{\inf I_R^{+}(x_i)+\inf I_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\inf I_R^{+}(x_i)}+\sqrt{\inf I_S^{+}(x_i)}}{2}
 + \sqrt{\tfrac{\sup I_R^{+}(x_i)+\sup I_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\sup I_R^{+}(x_i)}+\sqrt{\sup I_S^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{\left(1-\inf I_R^{+}(x_i)\right)+\left(1-\inf I_S^{+}(x_i)\right)}{2}} - \tfrac{\sqrt{1-\inf I_R^{+}(x_i)}+\sqrt{1-\inf I_S^{+}(x_i)}}{2}
 + \sqrt{\tfrac{\left(1-\sup I_R^{+}(x_i)\right)+\left(1-\sup I_S^{+}(x_i)\right)}{2}} - \tfrac{\sqrt{1-\sup I_R^{+}(x_i)}+\sqrt{1-\sup I_S^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{\inf F_R^{+}(x_i)+\inf F_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\inf F_R^{+}(x_i)}+\sqrt{\inf F_S^{+}(x_i)}}{2}
 + \sqrt{\tfrac{\sup F_R^{+}(x_i)+\sup F_S^{+}(x_i)}{2}} - \tfrac{\sqrt{\sup F_R^{+}(x_i)}+\sqrt{\sup F_S^{+}(x_i)}}{2} \\
& + \sqrt{\tfrac{-\left(\inf T_R^{-}(x_i)+\inf T_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\inf T_R^{-}(x_i)}+\sqrt{-\inf T_S^{-}(x_i)}}{2}
 + \sqrt{\tfrac{-\left(\sup T_R^{-}(x_i)+\sup T_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\sup T_R^{-}(x_i)}+\sqrt{-\sup T_S^{-}(x_i)}}{2} \\
& + \sqrt{\tfrac{-\left(\inf I_R^{-}(x_i)+\inf I_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\inf I_R^{-}(x_i)}+\sqrt{-\inf I_S^{-}(x_i)}}{2}
 + \sqrt{\tfrac{-\left(\sup I_R^{-}(x_i)+\sup I_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\sup I_R^{-}(x_i)}+\sqrt{-\sup I_S^{-}(x_i)}}{2} \\
& + \sqrt{\tfrac{\left(1+\inf I_R^{-}(x_i)\right)+\left(1+\inf I_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{1+\inf I_R^{-}(x_i)}+\sqrt{1+\inf I_S^{-}(x_i)}}{2}
 + \sqrt{\tfrac{\left(1+\sup I_R^{-}(x_i)\right)+\left(1+\sup I_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{1+\sup I_R^{-}(x_i)}+\sqrt{1+\sup I_S^{-}(x_i)}}{2} \\
& + \sqrt{\tfrac{-\left(\inf F_R^{-}(x_i)+\inf F_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\inf F_R^{-}(x_i)}+\sqrt{-\inf F_S^{-}(x_i)}}{2}
 + \sqrt{\tfrac{-\left(\sup F_R^{-}(x_i)+\sup F_S^{-}(x_i)\right)}{2}} - \tfrac{\sqrt{-\sup F_R^{-}(x_i)}+\sqrt{-\sup F_S^{-}(x_i)}}{2}
\;\Bigg] \qquad (3)
\end{aligned}
$$

Theorem 3. If R = <[inf T_R^+(x_i), sup T_R^+(x_i)], [inf I_R^+(x_i), sup I_R^+(x_i)], [inf F_R^+(x_i), sup F_R^+(x_i)], [inf T_R^−(x_i),
sup T_R^−(x_i)], [inf I_R^−(x_i), sup I_R^−(x_i)], [inf F_R^−(x_i), sup F_R^−(x_i)]> and S = <[inf T_S^+(x_i), sup T_S^+(x_i)],
[inf I_S^+(x_i), sup I_S^+(x_i)], [inf F_S^+(x_i), sup F_S^+(x_i)], [inf T_S^−(x_i), sup T_S^−(x_i)], [inf I_S^−(x_i), sup I_S^−(x_i)], [inf F_S^−(x_i),
sup F_S^−(x_i)]> are two IBNSs in U, then the cross entropy measure C_IB(R, S) satisfies the following properties:

(1) C_IB(R, S) ≥ 0;
(2) C_IB(R, S) = 0 for R = S, i.e., inf T_R^+(x_i) = inf T_S^+(x_i), sup T_R^+(x_i) = sup T_S^+(x_i), inf I_R^+(x_i) = inf I_S^+(x_i),
sup I_R^+(x_i) = sup I_S^+(x_i), inf F_R^+(x_i) = inf F_S^+(x_i), sup F_R^+(x_i) = sup F_S^+(x_i), inf T_R^−(x_i) = inf T_S^−(x_i),
sup T_R^−(x_i) = sup T_S^−(x_i), inf I_R^−(x_i) = inf I_S^−(x_i), sup I_R^−(x_i) = sup I_S^−(x_i), inf F_R^−(x_i) = inf F_S^−(x_i),
sup F_R^−(x_i) = sup F_S^−(x_i) ∀ x ∈ U;
(3) C_IB(R, S) = C_IB(S, R);
(4) C_IB(R^C, S^C) = C_IB(R, S).

Proof.
(1) From the inequality stated in the proof of Theorem 1, we can easily get C_IB(R, S) ≥ 0.
(2) Since inf T_R^+(x_i) = inf T_S^+(x_i), sup T_R^+(x_i) = sup T_S^+(x_i), inf I_R^+(x_i) = inf I_S^+(x_i), sup I_R^+(x_i) = sup I_S^+(x_i),
inf F_R^+(x_i) = inf F_S^+(x_i), sup F_R^+(x_i) = sup F_S^+(x_i), inf T_R^−(x_i) = inf T_S^−(x_i), sup T_R^−(x_i) = sup T_S^−(x_i),
inf I_R^−(x_i) = inf I_S^−(x_i), sup I_R^−(x_i) = sup I_S^−(x_i), inf F_R^−(x_i) = inf F_S^−(x_i), sup F_R^−(x_i) = sup F_S^−(x_i) ∀
x ∈ U, we have C_IB(R, S) = 0.


(3) Each of the sixteen summands in Equation (3) has the form √((a_R + a_S)/2) − (√a_R + √a_S)/2, where the quantity a is evaluated in the same way on R and on S; interchanging R and S therefore leaves every summand, and hence the whole sum, unchanged. Expanding C_IB(R, S) and C_IB(S, R) term by term thus gives the same expression, so C_IB(R, S) = C_IB(S, R).


(4) By the definition of the complement (Definition 2), inf T_{R^C}^+ = inf F_R^+, sup T_{R^C}^+ = sup F_R^+, inf I_{R^C}^+ = 1 − sup I_R^+, sup I_{R^C}^+ = 1 − inf I_R^+, inf F_{R^C}^+ = inf T_R^+, sup F_{R^C}^+ = sup T_R^+, inf T_{R^C}^− = inf F_R^−, sup T_{R^C}^− = sup F_R^−, inf I_{R^C}^− = −1 − sup I_R^−, sup I_{R^C}^− = −1 − inf I_R^−, inf F_{R^C}^− = inf T_R^−, sup F_{R^C}^− = sup T_R^−, and analogously for S^C. Substituting these expressions into Equation (3), the T^+ summands of C_IB(R^C, S^C) coincide with the F^+ summands of C_IB(R, S) and vice versa; the inf I^+ and sup I^+ summands coincide with the (1 − sup I^+) and (1 − inf I^+) summands and vice versa; the T^− and F^− summands exchange roles; and the (−inf I^−), (−sup I^−) summands exchange roles with the (1 + sup I^−), (1 + inf I^−) summands. The sixteen summands are therefore merely permuted and the total is unchanged. Hence C_IB(R^C, S^C) = C_IB(R, S). □

Example 3. Suppose that R = <[0.5, 0.8], [0.4, 0.6], [0.2, 0.6], [−0.3, −0.1], [−0.5, −0.1], [−0.5, −0.2]>
and S = <[0.5, 0.9], [0.4, 0.5], [0.1, 0.4], [−0.5, −0.3], [−0.7, −0.3], [−0.6, −0.3]> are two IBNSs; the cross
entropy between R and S is computed as follows:

$$
\begin{aligned}
C_{IB}(R,S) = \frac{1}{2}\Bigg[\;
& \sqrt{\tfrac{0.5+0.5}{2}} - \tfrac{\sqrt{0.5}+\sqrt{0.5}}{2}
+ \sqrt{\tfrac{0.8+0.9}{2}} - \tfrac{\sqrt{0.8}+\sqrt{0.9}}{2}
+ \sqrt{\tfrac{0.4+0.4}{2}} - \tfrac{\sqrt{0.4}+\sqrt{0.4}}{2}
+ \sqrt{\tfrac{0.6+0.5}{2}} - \tfrac{\sqrt{0.6}+\sqrt{0.5}}{2} \\
& + \sqrt{\tfrac{(1-0.4)+(1-0.4)}{2}} - \tfrac{\sqrt{1-0.4}+\sqrt{1-0.4}}{2}
+ \sqrt{\tfrac{(1-0.6)+(1-0.5)}{2}} - \tfrac{\sqrt{1-0.6}+\sqrt{1-0.5}}{2}
+ \sqrt{\tfrac{0.2+0.1}{2}} - \tfrac{\sqrt{0.2}+\sqrt{0.1}}{2}
+ \sqrt{\tfrac{0.6+0.4}{2}} - \tfrac{\sqrt{0.6}+\sqrt{0.4}}{2} \\
& + \sqrt{\tfrac{-(-0.3-0.5)}{2}} - \tfrac{\sqrt{-(-0.3)}+\sqrt{-(-0.5)}}{2}
+ \sqrt{\tfrac{-(-0.1-0.3)}{2}} - \tfrac{\sqrt{-(-0.1)}+\sqrt{-(-0.3)}}{2}
+ \sqrt{\tfrac{-(-0.5-0.7)}{2}} - \tfrac{\sqrt{-(-0.5)}+\sqrt{-(-0.7)}}{2}
+ \sqrt{\tfrac{-(-0.1-0.3)}{2}} - \tfrac{\sqrt{-(-0.1)}+\sqrt{-(-0.3)}}{2} \\
& + \sqrt{\tfrac{(1-0.5)+(1-0.7)}{2}} - \tfrac{\sqrt{1-0.5}+\sqrt{1-0.7}}{2}
+ \sqrt{\tfrac{(1-0.1)+(1-0.3)}{2}} - \tfrac{\sqrt{1-0.1}+\sqrt{1-0.3}}{2}
+ \sqrt{\tfrac{-(-0.5-0.6)}{2}} - \tfrac{\sqrt{-(-0.5)}+\sqrt{-(-0.6)}}{2}
+ \sqrt{\tfrac{-(-0.2-0.3)}{2}} - \tfrac{\sqrt{-(-0.2)}+\sqrt{-(-0.3)}}{2}
\;\Bigg] = 0.02984616.
\end{aligned}
$$

Definition 6. Let w_i be the weight of each element x_i, i = 1, 2, ..., n, and w_i ∈ [0, 1] with Σ_{i=1}^{n} w_i = 1; then the
weighted cross entropy measure between any two IBNSs R = <[inf T_R^+(x_i), sup T_R^+(x_i)], [inf I_R^+(x_i), sup I_R^+(x_i)],
[inf F_R^+(x_i), sup F_R^+(x_i)], [inf T_R^−(x_i), sup T_R^−(x_i)], [inf I_R^−(x_i), sup I_R^−(x_i)], [inf F_R^−(x_i), sup F_R^−(x_i)]> and
S = <[inf T_S^+(x_i), sup T_S^+(x_i)], [inf I_S^+(x_i), sup I_S^+(x_i)], [inf F_S^+(x_i), sup F_S^+(x_i)], [inf T_S^−(x_i), sup T_S^−(x_i)],
[inf I_S^−(x_i), sup I_S^−(x_i)], [inf F_S^−(x_i), sup F_S^−(x_i)]> in U can be defined as follows.

$$
C_{IB}(R,S)_w = \frac{1}{2}\sum_{i=1}^{n} w_i \,\big[\,\cdots\,\big] \qquad (4)
$$

where the bracketed expression [⋯] is exactly the sixteen-term sum displayed in Equation (3).

Theorem 4. For any two IBNSs R = <[inf T_R^+(x_i), sup T_R^+(x_i)], [inf I_R^+(x_i), sup I_R^+(x_i)], [inf F_R^+(x_i), sup F_R^+(x_i)], [inf T_R^-(x_i), sup T_R^-(x_i)], [inf I_R^-(x_i), sup I_R^-(x_i)], [inf F_R^-(x_i), sup F_R^-(x_i)]> and S = <[inf T_S^+(x_i), sup T_S^+(x_i)], [inf I_S^+(x_i), sup I_S^+(x_i)], [inf F_S^+(x_i), sup F_S^+(x_i)], [inf T_S^-(x_i), sup T_S^-(x_i)], [inf I_S^-(x_i), sup I_S^-(x_i)], [inf F_S^-(x_i), sup F_S^-(x_i)]> in U, the weighted cross entropy measure C_IB(R, S)_w also satisfies the following properties:

(1) C_IB(R, S)_w ≥ 0;
(2) C_IB(R, S)_w = 0 if, and only if, R = S, i.e., inf T_R^+(x_i) = inf T_S^+(x_i), sup T_R^+(x_i) = sup T_S^+(x_i), inf I_R^+(x_i) = inf I_S^+(x_i), sup I_R^+(x_i) = sup I_S^+(x_i), inf F_R^+(x_i) = inf F_S^+(x_i), sup F_R^+(x_i) = sup F_S^+(x_i), inf T_R^-(x_i) = inf T_S^-(x_i), sup T_R^-(x_i) = sup T_S^-(x_i), inf I_R^-(x_i) = inf I_S^-(x_i), sup I_R^-(x_i) = sup I_S^-(x_i), inf F_R^-(x_i) = inf F_S^-(x_i), sup F_R^-(x_i) = sup F_S^-(x_i) ∀ x ∈ U;
(3) C_IB(R, S)_w = C_IB(S, R)_w;
(4) C_IB(R^C, S^C)_w = C_IB(R, S)_w.

The proofs are presented in Appendix B.
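Before turning to the numerical example, properties (1)-(3) of Theorem 4 can also be checked numerically with the sketch given after Example 3; the weighted variant below (again with our own, illustrative names) simply multiplies the contribution of each element x_i by its weight w_i.

```python
def weighted_cross_entropy_ibns(R, S, w):
    # Definition 6: per-element weights w_i with sum(w) = 1 (not checked here).
    return 0.5 * sum(wi * ibns_terms(r, s) for wi, r, s in zip(w, R, S))

w = [1.0]  # a single element, so its weight is 1
assert weighted_cross_entropy_ibns(R, S, w) >= 0                        # property (1)
assert abs(weighted_cross_entropy_ibns(R, R, w)) < 1e-12                # property (2), R = S case
assert abs(weighted_cross_entropy_ibns(R, S, w)
           - weighted_cross_entropy_ibns(S, R, w)) < 1e-12              # property (3)
```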


Example 4. Consider the two IBNSs R = <[0.5, 0.8], [0.4, 0.6], [0.2, 0.6], [−0.3, −0.1], [−0.5, −0.1], [−0.5, −0.2]> and S = <[0.5, 0.9], [0.4, 0.5], [0.1, 0.4], [−0.5, −0.3], [−0.7, −0.3], [−0.6, −0.3]> of Example 3, and let w = 0.3; then the weighted cross entropy between R and S is calculated as follows:

$$C_{IB}(R, S)_w = \frac{1}{2} \times 0.3 \times \big[\text{the sixteen square-root terms of Example 3}\big] = 0.3 \times C_{IB}(R, S) = 0.00895385.$$
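With the helper defined above, the same value is obtained directly (up to rounding):

```python
print(round(weighted_cross_entropy_ibns(R, S, [0.3]), 8))  # 0.00895385, as computed in Example 4
```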

5. MADM Strategies Based on Cross Entropy Measures


In this section, we propose two new MADM strategies based on weighted cross entropy measures in bipolar neutrosophic and interval bipolar neutrosophic environments. Let B = {B1, B2, ..., Bm} (m ≥ 2) be a discrete set of m feasible alternatives which are to be evaluated based on n attributes C = {C1, C2, ..., Cn} (n ≥ 2), and let w_j be the weight vector of the attributes such that 0 ≤ w_j ≤ 1 and $\sum_{j=1}^{n} w_j = 1$.

5.1. MADM Strategy Based on Weighted Cross Entropy Measures of BNS


The procedure for solving MADM problems in a bipolar neutrosophic environment is presented
in the following steps:
Step 1. The rating of the performance value of alternative Bi (i = 1, 2, . . . , m) with respect to the
predefined attribute Cj (j = 1, 2, . . . , n) can be expressed in terms of bipolar neutrosophic information
as follows:

B_i = {⟨C_j, T_{B_i}^+(C_j), I_{B_i}^+(C_j), F_{B_i}^+(C_j), T_{B_i}^-(C_j), I_{B_i}^-(C_j), F_{B_i}^-(C_j)⟩ | C_j ∈ C, j = 1, 2, ..., n},

where 0 ≤ T_{B_i}^+(C_j) + I_{B_i}^+(C_j) + F_{B_i}^+(C_j) ≤ 3 and −3 ≤ T_{B_i}^-(C_j) + I_{B_i}^-(C_j) + F_{B_i}^-(C_j) ≤ 0, i = 1, 2, ..., m; j = 1, 2, ..., n.
Assume that d̃_ij = <T_ij^+, I_ij^+, F_ij^+, T_ij^-, I_ij^-, F_ij^-> is the bipolar neutrosophic decision matrix whose entries are the rating values of the alternatives with respect to the attributes provided by the expert or decision-maker. The bipolar neutrosophic decision matrix [d̃_ij]_{m×n} can be expressed as follows:

[d̃_ij]_{m×n} =

         C1      C2      ...     Cn
B1       d_11    d_12    ...     d_1n
B2       d_21    d_22    ...     d_2n
.        .       .       ...     .
Bm       d_m1    d_m2    ...     d_mn

Step 2. The positive ideal solution (PIS) p* = (d_1^*, d_2^*, ..., d_n^*) of the bipolar neutrosophic information is obtained as follows:

p_j^* = ⟨T_j^{*+}, I_j^{*+}, F_j^{*+}, T_j^{*-}, I_j^{*-}, F_j^{*-}⟩
     = ⟨[{max_i(T_ij^+) | j ∈ H1}; {min_i(T_ij^+) | j ∈ H2}], [{min_i(I_ij^+) | j ∈ H1}; {max_i(I_ij^+) | j ∈ H2}],
        [{min_i(F_ij^+) | j ∈ H1}; {max_i(F_ij^+) | j ∈ H2}], [{min_i(T_ij^-) | j ∈ H1}; {max_i(T_ij^-) | j ∈ H2}],
        [{max_i(I_ij^-) | j ∈ H1}; {min_i(I_ij^-) | j ∈ H2}], [{max_i(F_ij^-) | j ∈ H1}; {min_i(F_ij^-) | j ∈ H2}]⟩,  j = 1, 2, ..., n;

where H1 and H2 represent benefit and cost type attributes, respectively.
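As an illustration of Step 2, the following Python sketch builds p* from a bipolar neutrosophic decision matrix whose entry matrix[i][j] is the tuple (T+, I+, F+, T−, I−, F−) of the i-th alternative under the j-th attribute. The function name and the Boolean benefit/cost encoding are ours, introduced only for this illustration.

```python
def bn_pis(matrix, benefit):
    # benefit[j] is True for benefit-type attributes (H1) and False for cost-type ones (H2).
    m, n = len(matrix), len(matrix[0])
    pis = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        if benefit[j]:
            entry = (max(c[0] for c in col), min(c[1] for c in col), min(c[2] for c in col),
                     min(c[3] for c in col), max(c[4] for c in col), max(c[5] for c in col))
        else:
            entry = (min(c[0] for c in col), max(c[1] for c in col), max(c[2] for c in col),
                     max(c[3] for c in col), min(c[4] for c in col), min(c[5] for c in col))
        pis.append(entry)
    return pis
```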


Step 3. The weighted cross entropy between an alternative B_i, i = 1, 2, ..., m, and the ideal alternative p* is determined by

$$
C_B(B_i, p^*)_w = \sum_{j=1}^{n} w_j \sum_{\beta}\left(\sqrt{\frac{\beta_{ij} + \beta_j^{*}}{2}} - \frac{\sqrt{\beta_{ij}} + \sqrt{\beta_j^{*}}}{2}\right), \qquad (5)
$$

where $\beta$ runs over the eight non-negative component functions T^+, I^+, 1 − I^+, F^+, −T^−, −I^−, 1 + I^−, −F^−, with $\beta_{ij}$ denoting the value of the alternative B_i under the attribute C_j and $\beta_j^{*}$ the corresponding value of the ideal alternative p*.

Step 4. A smaller value of C_B(B_i, p*)_w, i = 1, 2, ..., m, indicates that the alternative B_i is closer to the PIS p*. Therefore, the alternative with the smallest weighted cross entropy measure is the best alternative.
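Steps 3 and 4 can be sketched as follows (the eight square-root terms follow the form of Equation (5) as reconstructed above; the kernel helper is the one given after Example 3, and all names are ours). Because the alternatives are ordered by increasing cross entropy, the first index returned corresponds to the recommended alternative.

```python
def bn_terms(a, b):
    # a, b: bipolar neutrosophic values (T+, I+, F+, T-, I-, F-).
    pairs = [(a[0], b[0]), (a[1], b[1]), (1 - a[1], 1 - b[1]), (a[2], b[2]),
             (-a[3], -b[3]), (-a[4], -b[4]), (1 + a[4], 1 + b[4]), (-a[5], -b[5])]
    return sum(kernel(x, y) for x, y in pairs)

def rank_bn(matrix, weights, benefit):
    pis = bn_pis(matrix, benefit)
    scores = [sum(w * bn_terms(row[j], pis[j]) for j, w in enumerate(weights))
              for row in matrix]
    order = sorted(range(len(matrix)), key=lambda i: scores[i])  # smallest cross entropy first
    return scores, order
```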

5.2. MADM Strategy Based on Weighted Cross Entropy Measures of IBNSs


The steps for solving MADM problems with interval bipolar neutrosophic information are
presented as follows.
Step 1. In an interval bipolar neutrosophic environment, the rating of the performance value
of alternative Bi (i = 1, 2, . . . , m) with respect to the predefined attribute Cj (j = 1, 2, . . . , n) can be
represented as follows:

B_i = {⟨C_j, [inf T_{B_i}^+(C_j), sup T_{B_i}^+(C_j)], [inf I_{B_i}^+(C_j), sup I_{B_i}^+(C_j)], [inf F_{B_i}^+(C_j), sup F_{B_i}^+(C_j)], [inf T_{B_i}^-(C_j), sup T_{B_i}^-(C_j)], [inf I_{B_i}^-(C_j), sup I_{B_i}^-(C_j)], [inf F_{B_i}^-(C_j), sup F_{B_i}^-(C_j)]⟩ | C_j ∈ C, j = 1, 2, ..., n},

where 0 ≤ sup T_{B_i}^+(C_j) + sup I_{B_i}^+(C_j) + sup F_{B_i}^+(C_j) ≤ 3 and −3 ≤ sup T_{B_i}^-(C_j) + sup I_{B_i}^-(C_j) + sup F_{B_i}^-(C_j) ≤ 0; j = 1, 2, ..., n. Let g̃_ij = <[^L T_ij^+, ^U T_ij^+], [^L I_ij^+, ^U I_ij^+], [^L F_ij^+, ^U F_ij^+], [^L T_ij^-, ^U T_ij^-], [^L I_ij^-, ^U I_ij^-], [^L F_ij^-, ^U F_ij^-]> be the interval bipolar neutrosophic decision matrix whose entries are the rating values of the alternatives with respect to the attributes provided by the expert or decision-maker. The interval bipolar neutrosophic decision matrix [g̃_ij]_{m×n} can be presented as follows:

[g̃_ij]_{m×n} =

         C1      C2      ...     Cn
B1       g_11    g_12    ...     g_1n
B2       g_21    g_22    ...     g_2n
.        .       .       ...     .
Bm       g_m1    g_m2    ...     g_mn


Step 2. The PIS q* = (g_1^*, g_2^*, ..., g_n^*) of the interval bipolar neutrosophic information is obtained as follows:

q_j^* = ⟨[^L T_j^{*+}, ^U T_j^{*+}], [^L I_j^{*+}, ^U I_j^{*+}], [^L F_j^{*+}, ^U F_j^{*+}], [^L T_j^{*-}, ^U T_j^{*-}], [^L I_j^{*-}, ^U I_j^{*-}], [^L F_j^{*-}, ^U F_j^{*-}]⟩
     = ⟨[{max_i(^L T_ij^+) | j ∈ H1}; {min_i(^L T_ij^+) | j ∈ H2}, {max_i(^U T_ij^+) | j ∈ H1}; {min_i(^U T_ij^+) | j ∈ H2}],
        [{min_i(^L I_ij^+) | j ∈ H1}; {max_i(^L I_ij^+) | j ∈ H2}, {min_i(^U I_ij^+) | j ∈ H1}; {max_i(^U I_ij^+) | j ∈ H2}],
        [{min_i(^L F_ij^+) | j ∈ H1}; {max_i(^L F_ij^+) | j ∈ H2}, {min_i(^U F_ij^+) | j ∈ H1}; {max_i(^U F_ij^+) | j ∈ H2}],
        [{min_i(^L T_ij^-) | j ∈ H1}; {max_i(^L T_ij^-) | j ∈ H2}, {min_i(^U T_ij^-) | j ∈ H1}; {max_i(^U T_ij^-) | j ∈ H2}],
        [{max_i(^L I_ij^-) | j ∈ H1}; {min_i(^L I_ij^-) | j ∈ H2}, {max_i(^U I_ij^-) | j ∈ H1}; {min_i(^U I_ij^-) | j ∈ H2}],
        [{max_i(^L F_ij^-) | j ∈ H1}; {min_i(^L F_ij^-) | j ∈ H2}, {max_i(^U F_ij^-) | j ∈ H1}; {min_i(^U F_ij^-) | j ∈ H2}]⟩,
     j = 1, 2, ..., n,

where H1 and H2 stand for benefit and cost type attributes, respectively.
Step 3. The weighted cross entropy between an alternative B_i, i = 1, 2, ..., m, and the ideal alternative q* under an interval bipolar neutrosophic setting is computed as follows:

$$
C_{IB}(B_i, q^*)_w = \frac{1}{2}\sum_{j=1}^{n} w_j \sum_{\alpha}\left(\sqrt{\frac{\alpha_{ij} + \alpha_j^{*}}{2}} - \frac{\sqrt{\alpha_{ij}} + \sqrt{\alpha_j^{*}}}{2}\right), \qquad (6)
$$

where $\alpha$ runs over the sixteen non-negative component functions of Equation (4), formed here from the interval end points: ^L T^+, ^U T^+, ^L I^+, ^U I^+, 1 − ^L I^+, 1 − ^U I^+, ^L F^+, ^U F^+, −^L T^−, −^U T^−, −^L I^−, −^U I^−, 1 + ^L I^−, 1 + ^U I^−, −^L F^−, −^U F^−; $\alpha_{ij}$ denotes the value of the alternative B_i under the attribute C_j and $\alpha_j^{*}$ the corresponding value of the ideal alternative q*.

Step 4. A smaller value of C_IB(B_i, q*)_w, i = 1, 2, ..., m, indicates that the alternative B_i is closer to the PIS q*. Hence, the alternative with the smallest weighted cross entropy measure will be identified as the best alternative.
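The interval-valued strategy of this subsection can be sketched in the same way; the only differences are that the PIS rule of Step 2 is applied to both interval end points and that the cross entropy of Equation (6), as reconstructed above, sums sixteen terms per attribute. The helpers kernel and ibns_terms are those given after Example 3, and the names below are again ours.

```python
def rank_ibn(matrix, weights, benefit):
    # matrix[i][j]: an interval bipolar neutrosophic value, six [inf, sup] pairs
    # in the order T+, I+, F+, T-, I-, F-; benefit[j] marks the benefit-type attributes.
    m, n = len(matrix), len(matrix[0])
    # +1 components are maximised for benefit attributes, -1 components are minimised.
    direction = [+1, -1, -1, -1, +1, +1]          # T+, I+, F+, T-, I-, F-
    pis = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        entry = []
        for c in range(6):
            pick = max if (direction[c] > 0) == benefit[j] else min
            entry.append([pick(v[c][0] for v in col), pick(v[c][1] for v in col)])
        pis.append(entry)
    scores = [0.5 * sum(w * ibns_terms(row[j], pis[j]) for j, w in enumerate(weights))
              for row in matrix]
    return scores, sorted(range(m), key=lambda i: scores[i])
```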
A conceptual model of the proposed strategy is shown in Figure 1.

Figure 1. Conceptual model of the proposed strategy.


6. Illustrative Example
In this section, we solve two numerical MADM problems and present a comparison with other existing strategies to verify the applicability and effectiveness of the proposed strategies in bipolar neutrosophic and interval bipolar neutrosophic environments.

6.1. Car Selection Problem with Bipolar Neutrosophic Information


Consider the problem discussed in [81,86–88] where a buyer wants to purchase a car based on
some predefined attributes. Suppose that four types of cars (alternatives) Bi , (i = 1, 2, 3, 4) are available
in the market. Four attributes are taken into consideration in the decision-making environment, namely,
Fuel economy (C1), Aerod (C2), Comfort (C3), Safety (C4), to select the most desirable car. Assume that the weight vector for the four attributes is known and given by w = (w1, w2, w3, w4) = (0.5, 0.25, 0.125, 0.125). Therefore, the bipolar neutrosophic decision matrix ⟨d_ij⟩_{4×4} can be obtained as given below.
The bipolar neutrosophic decision matrix [d̃_ij]_{4×4} =

      C1                                   C2                                   C3                                   C4
B1   <0.5, 0.7, 0.2, −0.7, −0.3, −0.6>    <0.4, 0.4, 0.5, −0.7, −0.8, −0.4>    <0.7, 0.7, 0.5, −0.8, −0.7, −0.6>    <0.1, 0.5, 0.7, −0.5, −0.2, −0.8>
B2   <0.9, 0.7, 0.5, −0.7, −0.7, −0.1>    <0.7, 0.6, 0.8, −0.7, −0.5, −0.1>    <0.9, 0.4, 0.6, −0.1, −0.7, −0.5>    <0.5, 0.2, 0.7, −0.5, −0.1, −0.9>
B3   <0.3, 0.4, 0.2, −0.6, −0.3, −0.7>    <0.2, 0.2, 0.2, −0.4, −0.7, −0.4>    <0.9, 0.5, 0.5, −0.6, −0.5, −0.2>    <0.7, 0.5, 0.3, −0.4, −0.2, −0.2>
B4   <0.9, 0.7, 0.2, −0.8, −0.6, −0.1>    <0.3, 0.5, 0.2, −0.5, −0.5, −0.2>    <0.5, 0.4, 0.5, −0.1, −0.7, −0.2>    <0.2, 0.4, 0.8, −0.5, −0.5, −0.6>

The positive ideal bipolar neutrosophic solutions are computed from [d̃_ij]_{4×4} as follows:

p* = [<0.9, 0.4, 0.2, −0.8, −0.3, −0.1>, <0.7, 0.2, 0.2, −0.7, −0.5, −0.1>,
<0.9, 0.4, 0.5, −0.8, −0.5, −0.2>, <0.7, 0.2, 0.3, −0.5, −0.1, −0.2>].

Using Equation (5), the weighted cross entropy measure CB (Bi , p*)w is obtained as follows:

CB (B1 , p*)w = 0.0734, CB (B2 , p*)w = 0.0688, CB (B3 , p*)w = 0.0642, CB (B4 , p*)w = 0.0516. (7)

According to the weighted cross entropy measure C_B(B_i, p*)_w, the order of the four alternatives is B4 ≺ B3 ≺ B2 ≺ B1; therefore, B4 is the best car.
We compare our obtained result with the results of other existing strategies (see Table 1), where
the known weight of the attributes is given by w = (w1 , w2 , w3 , w4 ) = (0.5, 0.25, 0.125, 0.125). It is to be
noted that the ranking results obtained from the other existing strategies are different from the result
of the proposed strategies in some cases. The reason is that the different authors adopted different
decision-making strategies and thereby obtained different ranking results. However, the proposed
strategies are simple and straightforward and can effectively solve decision-making problems with
bipolar neutrosophic information.

Table 1. The results of the car selection problem obtained from different methods.

Methods                                              Ranking Results         Best Option
The proposed weighted cross entropy measure          B4 ≺ B3 ≺ B2 ≺ B1       B4
Dey et al.'s TOPSIS strategy [87]                    B1 ≺ B3 ≺ B2 ≺ B4       B4
Deli et al.'s strategy [81]                          B1 ≺ B2 ≺ B4 ≺ B3       B3
Projection measure [88]                              B3 ≺ B4 ≺ B1 ≺ B2       B2
Bidirectional projection measure [88]                B2 ≺ B1 ≺ B4 ≺ B3       B3
Hybrid projection measure [88] with ρ = 0.25         B2 ≺ B1 ≺ B3 ≺ B4       B4
Hybrid projection measure [88] with ρ = 0.50         B3 ≺ B2 ≺ B1 ≺ B4       B4
Hybrid projection measure [88] with ρ = 0.75         B1 ≺ B3 ≺ B4 ≺ B2       B2
Hybrid projection measure [88] with ρ = 0.90         B3 ≺ B4 ≺ B2 ≺ B1       B1
Hybrid similarity measure [88] with ρ = 0.25         B2 ≺ B4 ≺ B1 ≺ B3       B3
Hybrid similarity measure [88] with ρ = 0.30         B2 ≺ B4 ≺ B1 ≺ B3       B3
Hybrid similarity measure [88] with ρ = 0.60         B2 ≺ B4 ≺ B1 ≺ B3       B3
Hybrid similarity measure [88] with ρ = 0.90         B2 ≺ B4 ≺ B3 ≺ B1       B1


6.2. Interval Bipolar Neutrosophic MADM Investment Problem


Consider an interval bipolar neutrosophic MADM problem studied in [91] with four possible
alternatives with the aim to invest a sum of money in the best choice. The four alternatives are:

> a food company (B1 ),


> a car company (B2 ),
> an arms company (B3 ), and
> a computer company (B4 ).

The investment company selects the best option based on three predefined attributes, namely,
growth analysis (C1 ), risk analysis (C2 ), and environment analysis (C3 ). We consider C1 and C2 to be
benefit type attributes and C3 to be a cost type attribute based on Ye [93]. Assume that the weight
vector [91] of C1 , C2 , and C3 is given by w = (w1 , w2 , w3 ) = (0.35, 0.25, 0.4). The interval bipolar
neutrosophic decision matrix [g̃_ij]_{4×3} presented by the decision-maker or expert is as follows.
Interval bipolar neutrosophic decision matrix [g̃_ij]_{4×3} =

C1:
B1   <[0.4, 0.5], [0.2, 0.3], [0.3, 0.4], [−0.3, −0.2], [−0.4, −0.3], [−0.5, −0.4]>
B2   <[0.6, 0.7], [0.1, 0.2], [0.2, 0.3], [−0.2, −0.1], [−0.3, −0.2], [−0.7, −0.6]>
B3   <[0.3, 0.6], [0.2, 0.3], [0.3, 0.4], [−0.3, −0.2], [−0.4, −0.3], [−0.6, −0.3]>
B4   <[0.7, 0.8], [0.0, 0.1], [0.1, 0.2], [−0.1, −0.0], [−0.2, −0.1], [−0.8, −0.7]>

C2:
B1   <[0.4, 0.6], [0.1, 0.3], [0.2, 0.4], [−0.3, −0.1], [−0.4, −0.2], [−0.6, −0.4]>
B2   <[0.6, 0.7], [0.1, 0.2], [0.2, 0.3], [−0.2, −0.1], [−0.3, −0.2], [−0.7, −0.6]>
B3   <[0.5, 0.6], [0.2, 0.3], [0.3, 0.4], [−0.3, −0.2], [−0.4, −0.3], [−0.6, −0.5]>
B4   <[0.6, 0.7], [0.1, 0.2], [0.1, 0.3], [−0.2, −0.1], [−0.3, −0.1], [−0.7, −0.6]>

C3:
B1   <[0.7, 0.9], [0.2, 0.3], [0.4, 0.5], [−0.3, −0.2], [−0.5, −0.4], [−0.9, −0.7]>
B2   <[0.3, 0.6], [0.3, 0.5], [0.8, 0.9], [−0.5, −0.3], [−0.9, −0.8], [−0.6, −0.3]>
B3   <[0.4, 0.5], [0.2, 0.4], [0.7, 0.9], [−0.4, −0.2], [−0.9, −0.7], [−0.5, −0.4]>
B4   <[0.6, 0.7], [0.3, 0.4], [0.8, 0.9], [−0.4, −0.3], [−0.9, −0.8], [−0.7, −0.6]>
From the matrix [g̃_ij]_{4×3}, we determine the positive ideal interval bipolar neutrosophic solution (q*), following Step 2 of Section 5.2, as follows:

q* = <[0.7, 0.8], [0.0, 0.1], [0.1, 0.2], [−0.3, −0.2], [−0.2, −0.1], [−0.5, −0.3]>;
<[0.6, 0.7], [0.1, 0.2], [0.1, 0.3], [−0.3, −0.2], [−0.3, −0.1], [−0.6, −0.4]>;
<[0.3, 0.5], [0.3, 0.5], [0.8, 0.9], [−0.3, −0.2], [−0.9, −0.8], [−0.9, −0.7]>.

The weighted cross entropy between an alternative Bi , i = 1, 2, . . . , m, and the ideal alternative q*
can be obtained as given below:

CIB (B1 , q*)w = 0.0606, CIB (B2 , q*)w = 0.0286, CIB (B3 , q*)w = 0.0426, CIB (B4 , q*)w = 0.0423.

On the basis of the weighted cross entropy measure C_IB(B_i, q*)_w, the order of the four alternatives is B2 ≺ B4 ≺ B3 ≺ B1; therefore, B2 is the best choice.
Next, the comparison of the results obtained from different methods is presented in Table 2, where the weight vector of the attributes is given by w = (w1, w2, w3) = (0.35, 0.25, 0.4). We observe that B2 is the best option obtained using the proposed method and B4 is the best option obtained using the method of Mahmood et al. [91]. The reason for this may be that we use the interval bipolar neutrosophic cross entropy method, whereas Mahmood et al. [91] derived the most desirable alternative based on a weighted arithmetic average operator in an interval bipolar neutrosophic setting.

Table 2. The results of the investment problem obtained from different methods.

Methods                                              Ranking Results         Best Option
The proposed weighted cross entropy measure          B2 ≺ B4 ≺ B3 ≺ B1       B2
Mahmood et al.'s strategy [91]                       B2 ≺ B3 ≺ B1 ≺ B4       B4

7. Conclusions
In this paper we defined cross entropy and weighted cross entropy measures for bipolar
neutrosophic sets and proved their basic properties. We also extended the proposed concept to
the interval bipolar neutrosophic environment and proved its basic properties. The proposed cross
entropy measures were then employed to develop two new multi-attribute decision-making strategies.
Two illustrative numerical examples were solved and comparisons with existing strategies were
provided to demonstrate the feasibility, applicability, and efficiency of the proposed strategies. We hope
that the proposed cross entropy measures can be effective in dealing with group decision-making,
data analysis, medical diagnosis, selection of a suitable company to build power plants [94], teacher
selection [95], quality brick selection [96], weaver selection [97,98], etc. In the future, the cross entropy measures can be extended to other neutrosophic hybrid environments, such as bipolar neutrosophic soft expert sets, bipolar neutrosophic refined sets, etc.

Author Contributions: Surapati Pramanik and Partha Pratim Dey conceived and designed the experiments; Surapati
Pramanik performed the experiments; Jun Ye and Florentin Smarandache analyzed the data; Surapati Pramanik and
Partha Pratim Dey wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A
Proof of Theorem 2

(1) From the inequality stated in Theorem 1, we can easily obtain C_B(M, N)_w ≥ 0.
(2) C_B(M, N)_w = 0 if, and only if, M = N, i.e., T_M^+(x_i) = T_N^+(x_i), I_M^+(x_i) = I_N^+(x_i), F_M^+(x_i) = F_N^+(x_i), T_M^-(x_i) = T_N^-(x_i), I_M^-(x_i) = I_N^-(x_i), F_M^-(x_i) = F_N^-(x_i) ∀ x ∈ U.
(3) Writing the weighted cross entropy out in full,

$$
C_B(M, N)_w = \sum_{i=1}^{n} w_i \sum_{\beta}\left(\sqrt{\frac{\beta_M(x_i) + \beta_N(x_i)}{2}} - \frac{\sqrt{\beta_M(x_i)} + \sqrt{\beta_N(x_i)}}{2}\right),
$$

where $\beta$ runs over the eight non-negative component functions T^+, I^+, 1 − I^+, F^+, −T^−, −I^−, 1 + I^−, −F^−. Every summand is symmetric in M and N, so interchanging M and N term by term leaves the whole expression unchanged; hence C_B(M, N)_w = C_B(N, M)_w.
(4) Passing to the complements replaces the components of M (and likewise of N) as follows: T^+ ↦ F^+, I^+ ↦ 1 − I^+, F^+ ↦ T^+, T^− ↦ F^−, I^− ↦ −1 − I^−, F^− ↦ T^−. Substituting these into the expression above and using 1 − (1 − I^+) = I^+, −(−1 − I^−) = 1 + I^− and 1 + (−1 − I^−) = −I^−, the eight summands of C_B(M^C, N^C)_w are exactly the eight summands of C_B(M, N)_w, merely taken in a different order. Therefore C_B(M^C, N^C)_w = C_B(M, N)_w. □
Appendix B

Proof of Theorem 4

(1) Obviously, we can easily get C_IB(R, S)_w ≥ 0.
(2) C_IB(R, S)_w = 0 if, and only if, R = S: if inf T_R^+(x_i) = inf T_S^+(x_i), sup T_R^+(x_i) = sup T_S^+(x_i), inf I_R^+(x_i) = inf I_S^+(x_i), sup I_R^+(x_i) = sup I_S^+(x_i), inf F_R^+(x_i) = inf F_S^+(x_i), sup F_R^+(x_i) = sup F_S^+(x_i), inf T_R^-(x_i) = inf T_S^-(x_i), sup T_R^-(x_i) = sup T_S^-(x_i), inf I_R^-(x_i) = inf I_S^-(x_i), sup I_R^-(x_i) = sup I_S^-(x_i), inf F_R^-(x_i) = inf F_S^-(x_i) and sup F_R^-(x_i) = sup F_S^-(x_i) for all x ∈ U, then every summand of Equation (4) vanishes and C_IB(R, S)_w = 0; conversely, C_IB(R, S)_w = 0 forces each of these equalities, i.e., R = S.
(3) By Equation (4), C_IB(R, S)_w is a weighted sum of terms of the form $\sqrt{(\alpha_R(x_i)+\alpha_S(x_i))/2} - (\sqrt{\alpha_R(x_i)}+\sqrt{\alpha_S(x_i)})/2$, each of which is symmetric in R and S. Interchanging R and S term by term therefore leaves the expression unchanged, so C_IB(R, S)_w = C_IB(S, R)_w.
(4) Passing to the complements R^C and S^C replaces the component functions in Equation (4) as follows: inf T^+ ↦ inf F^+, sup T^+ ↦ sup F^+, inf I^+ ↦ 1 − inf I^+, sup I^+ ↦ 1 − sup I^+, inf F^+ ↦ inf T^+, sup F^+ ↦ sup T^+, inf T^− ↦ inf F^−, sup T^− ↦ sup F^−, inf I^− ↦ −1 − inf I^−, sup I^− ↦ −1 − sup I^−, inf F^− ↦ inf T^−, sup F^− ↦ sup T^−. Using 1 − (1 − a) = a, −(−1 − a) = 1 + a and 1 + (−1 − a) = −a, the sixteen summands of C_IB(R^C, S^C)_w coincide, up to their order, with those of C_IB(R, S)_w; hence C_IB(R^C, S^C)_w = C_IB(R, S)_w.

This completes the proof. 

References
1. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communications; The University of Illinois Press:
Urbana, IL, USA, 1949.
2. Shannon, C.E. A mathematical theory of communications. Bell Syst. Tech. J. 1948, 27, 379–423. [CrossRef]
3. Criado, F.; Gachechiladze, T. Entropy of fuzzy events. Fuzzy Sets Syst. 1997, 88, 99–106. [CrossRef]
4. Herencia, J.; Lamta, M. Entropy measure associated with fuzzy basic probability assignment. In Proceedings
of the IEEE International Conference on Fuzzy Systems, Barcelona, Spain, 5 July 1997; Volume 2, pp. 863–868.
5. Rudas, I.; Kaynak, M. Entropy-based operations on fuzzy sets. IEEE Trans. Fuzzy Syst. 1998, 6, 33–39.
[CrossRef]
6. Zadeh, L.A. Probability measures of fuzzy events. J. Math. Anal. Appl. 1968, 23, 421–427. [CrossRef]
7. Luca, A.D.; Termini, S. A definition of non-probabilistic entropy in the setting of fuzzy set theory. Inf. Control
1972, 20, 301–312. [CrossRef]
8. Sander, W. On measure of fuzziness. Fuzzy Sets Syst. 1989, 29, 49–55. [CrossRef]
9. Xie, W.; Bedrosian, S. An information measure for fuzzy sets. IEEE Trans. Syst. Man Cybern. 1984, 14, 151–156.
[CrossRef]
10. Pal, N.; Pal, S. Higher order fuzzy entropy and hybrid entropy of a fuzzy set. Inf. Sci. 1992, 61, 211–221.
[CrossRef]
11. Kaufmann, A.; Gupta, M. Introduction of Fuzzy Arithmetic: Theory and Applications; Van Nostrand Reinhold
Co.: New York, NY, USA, 1985.
12. Yager, R. On the measure of fuzziness and negation. Part I: Membership in the unit interval. Int. J. Gen. Syst.
1979, 5, 221–229. [CrossRef]
13. Yager, R. On the measure of fuzziness and negation. Part II: Lattice. Inf. Control 1980, 44, 236–260. [CrossRef]
14. Kosko, B. Fuzzy entropy and conditioning. Inf. Sci. 1986, 40, 165–174. [CrossRef]
15. Kosko, B. Concepts of fuzzy information measure on continuous domains. Int. J. Gen. Syst. 1990, 17, 211–240.
[CrossRef]
16. Prakash, O.; Sharma, P.K.; Mahajan, R. New measures of weighted fuzzy entropy and their applications for
the study of maximum weighted fuzzy entropy principle. Inf. Sci. 2008, 178, 2389–2395. [CrossRef]


17. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval–valued fuzzy sets. Fuzzy Sets Syst.
1996, 78, 305–316. [CrossRef]
18. Szmidt, E.; Kacprzyk, J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets Syst. 2001, 118, 467–477. [CrossRef]
19. Wei, C.P.; Wang, P.; Zhang, Y.Z. Entropy, similarity measure of interval–valued intuitionistic fuzzy sets and
their applications. Inf. Sci. 2011, 181, 4273–4286. [CrossRef]
20. Li, X.Y. Interval–valued intuitionistic fuzzy continuous cross entropy and its application in multi-attribute
decision-making. Comput. Eng. Appl. 2013, 49, 234–237.
21. Shang, X.G.; Jiang, W.S. A note on fuzzy information measures. Pattern Recognit. Lett. 1997, 18, 425–432.
[CrossRef]
22. Vlachos, I.K.; Sergiadis, G.D. Intuitionistic fuzzy information applications to pattern recognition. Pattern Recognit. Lett. 2007, 28, 197–206. [CrossRef]
23. Ye, J. Fuzzy cross entropy of interval–valued intuitionistic fuzzy sets and its optimal decision-making method
based on the weights of the alternatives. Expert Syst. Appl. 2011, 38, 6179–6183. [CrossRef]
24. Xia, M.M.; Xu, Z.S. Entropy/cross entropy–based group decision making under intuitionistic fuzzy sets.
Inf. Fusion 2012, 13, 31–47. [CrossRef]
25. Tong, X.; Yu, L.A. A novel MADM approach based on fuzzy cross entropy with interval-valued intuitionistic
fuzzy sets. Math. Probl. Eng. 2015, 2015, 1–9. [CrossRef]
26. Smarandache, F. A Unifying Field of Logics. Neutrosophy: Neutrosophic Probability, Set and Logic; American
Research Press: Rehoboth, DE, USA, 1998.
27. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [CrossRef]
28. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [CrossRef]
29. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multispace Multistruct.
2010, 4, 410–413.
30. Pramanik, S.; Biswas, P.; Giri, B.C. Hybrid vector similarity measures and their applications to multi-attribute
decision making under neutrosophic environment. Neural Comput. Appl. 2017, 28, 1163–1176. [CrossRef]
31. Biswas, P.; Pramanik, S.; Giri, B.C. Entropy based grey relational analysis method for multi-attribute decision
making under single valued neutrosophic assessments. Neutrosophic Sets Syst. 2014, 2, 102–110.
32. Biswas, P.; Pramanik, S.; Giri, B.C. A new methodology for neutrosophic multi-attribute decision making
with unknown weight information. Neutrosophic Sets Syst. 2014, 3, 42–52.
33. Biswas, P.; Pramanik, S.; Giri, B.C. TOPSIS method for multi-attribute group decision-making under single
valued neutrosophic environment. Neural Comput. Appl. 2015. [CrossRef]
34. Biswas, P.; Pramanik, S.; Giri, B.C. Aggregation of triangular fuzzy neutrosophic set information and its
application to multi-attribute decision making. Neutrosophic Sets Syst. 2016, 12, 20–40.
35. Biswas, P.; Pramanik, S.; Giri, B.C. Value and ambiguity index based ranking method of single-valued trapezoidal
neutrosophic numbers and its application to multi-attribute decision making. Neutrosophic Sets Syst. 2016, 12,
127–138.
36. Biswas, P.; Pramanik, S.; Giri, B.C. Multi-attribute group decision making based on expected value of
neutrosophic trapezoidal numbers. In New Trends in Neutrosophic Theory and Applications; Smarandache, F.,
Pramanik, S., Eds.; Pons Editions: Brussels, Belgium, 2017; Volume II, in press.
37. Biswas, P.; Pramanik, S.; Giri, B.C. Non-linear programming approach for single-valued neutrosophic TOPSIS
method. New Math. Nat. Comput. 2017, in press.
38. Deli, I.; Subas, Y. A ranking method of single valued neutrosophic numbers and its applications to
multi-attribute decision making problems. Int. J. Mach. Learn. Cybern. 2016. [CrossRef]
39. Ji, P.; Wang, J.Q.; Zhang, H.Y. Frank prioritized Bonferroni mean operator with single-valued neutrosophic
sets and its application in selecting third-party logistics providers. Neural Comput. Appl. 2016. [CrossRef]
40. Kharal, A. A neutrosophic multi-criteria decision making method. New Math. Nat. Comput. 2014, 10, 143–162.
[CrossRef]
41. Liang, R.X.; Wang, J.Q.; Li, L. Multi-criteria group decision making method based on interdependent inputs
of single valued trapezoidal neutrosophic information. Neural Comput. Appl. 2016. [CrossRef]
42. Liang, R.X.; Wang, J.Q.; Zhang, H.Y. A multi-criteria decision-making method based on single-valued
trapezoidal neutrosophic preference relations with complete weight information. Neural Comput. Appl. 2017.
[CrossRef]


43. Liu, P.; Chu, Y.; Li, Y.; Chen, Y. Some generalized neutrosophic number Hamacher aggregation operators
and their application to group decision making. Int. J. Fuzzy Syst. 2014, 16, 242–255.
44. Liu, P.D.; Li, H.G. Multiple attribute decision-making method based on some normal neutrosophic Bonferroni
mean operators. Neural Comput. Appl. 2017, 28, 179–194. [CrossRef]
45. Liu, P.; Wang, Y. Multiple attribute decision-making method based on single-valued neutrosophic normalized
weighted Bonferroni mean. Neural Comput. Appl. 2014, 25, 2001–2010. [CrossRef]
46. Peng, J.J.; Wang, J.Q.; Wang, J.; Zhang, H.Y.; Chen, X.H. Simplified neutrosophic sets and their applications
in multi-criteria group decision-making problems. Int. J. Syst. Sci. 2016, 47, 2342–2358. [CrossRef]
47. Peng, J.; Wang, J.; Zhang, H.; Chen, X. An outranking approach for multi-criteria decision-making problems
with simplified neutrosophic sets. Appl. Soft Comput. 2014, 25, 336–346. [CrossRef]
48. Pramanik, S.; Banerjee, D.; Giri, B.C. Multi–criteria group decision making model in neutrosophic refined set
and its application. Glob. J. Eng. Sci. Res. Manag. 2016, 3, 12–18.
49. Pramanik, S.; Dalapati, S.; Roy, T.K. Logistics center location selection approach based on neutrosophic
multi-criteria decision making. In New Trends in Neutrosophic Theory and Applications; Smarandache, F.,
Pramanik, S., Eds.; Pons Asbl: Brussels, Belgium, 2016; Volume 1, pp. 161–174, ISBN 978-1-59973-498-9.
50. Sahin, R.; Karabacak, M. A multi attribute decision making method based on inclusion measure for interval
neutrosophic sets. Int. J. Eng. Appl. Sci. 2014, 2, 13–15.
51. Sahin, R.; Kucuk, A. Subsethood measure for single valued neutrosophic sets. J. Intell. Fuzzy Syst. 2014.
[CrossRef]
52. Sahin, R.; Liu, P. Maximizing deviation method for neutrosophic multiple attribute decision making with
incomplete weight information. Neural Comput. Appl. 2016, 27, 2017–2029. [CrossRef]
53. Sodenkamp, M. Models, Strategies and Applications of Group Multiple-Criteria Decision Analysis in
Complex and Uncertain Systems. Ph.D. Thesis, University of Paderborn, Paderborn, Germany, 2013.
54. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued
neutrosophic environment. Int. J. Gen. Syst. 2013, 42, 386–394. [CrossRef]
55. Ye, J. Another form of correlation coefficient between single valued neutrosophic sets and its multiple
attribute decision making method. Neutrosophic Sets Syst. 2013, 1, 8–12.
56. Ye, J. A multi criteria decision-making method using aggregation operators for simplified neutrosophic sets.
J. Intell. Fuzzy Syst. 2014, 26, 2459–2466.
57. Ye, J. Trapezoidal neutrosophic set and its application to multiple attribute decision-making. Neural Comput. Appl.
2015, 26, 1157–1166. [CrossRef]
58. Ye, J. Bidirectional projection method for multiple attribute group decision making with neutrosophic
number. Neural Comput. Appl. 2015. [CrossRef]
59. Ye, J. Projection and bidirectional projection measures of single valued neutrosophic sets and their
decision—Making method for mechanical design scheme. J. Exp. Theor. Artif. Intell. 2016. [CrossRef]
60. Nancy, G.H. Novel single-valued neutrosophic decision making operators under Frank norm operations
and its application. Int. J. Uncertain. Quant. 2016, 6, 361–375. [CrossRef]
61. Nancy, G.H. Some new biparametric distance measures on single-valued neutrosophic sets with applications
to pattern recognition and medical diagnosis. Information 2017, 8, 162. [CrossRef]
62. Pramanik, S.; Roy, T.K. Neutrosophic game theoretic approach to Indo-Pak conflict over Jammu-Kashmir.
Neutrosophic Sets Syst. 2014, 2, 82–101.
63. Mondal, K.; Pramanik, S. Multi-criteria group decision making approach for teacher recruitment in higher
education under simplified Neutrosophic environment. Neutrosophic Sets Syst. 2014, 6, 28–34.
64. Mondal, K.; Pramanik, S. Neutrosophic decision making model of school choice. Neutrosophic Sets Syst. 2015,
7, 62–68.
65. Cheng, H.D.; Guo, Y. A new neutrosophic approach to image thresholding. New Math. Nat. Comput. 2008, 4,
291–308. [CrossRef]
66. Guo, Y.; Cheng, H.D. New neutrosophic approach to image segmentation. Pattern Recognit. 2009, 42, 587–595.
[CrossRef]
67. Guo, Y.; Sengur, A.; Ye, J. A novel image thresholding algorithm based on neutrosophic similarity score.
Measurement 2014, 58, 175–186. [CrossRef]
68. Ye, J. Single valued neutrosophic minimum spanning tree and its clustering method. J. Intell. Syst. 2014, 23,
311–324. [CrossRef]


69. Ye, J. Clustering strategies using distance-based similarity measures of single-valued neutrosophic sets.
J. Intell. Syst. 2014, 23, 379–389.
70. Mondal, K.; Pramanik, S. A study on problems of Hijras in West Bengal based on neutrosophic cognitive
maps. Neutrosophic Sets Syst. 2014, 5, 21–26.
71. Pramanik, S.; Chakrabarti, S. A study on problems of construction workers in West Bengal based on
neutrosophic cognitive maps. Int. J. Innov. Res. Sci. Eng. Technol. 2013, 2, 6387–6394.
72. Majumdar, P.; Samanta, S.K. On similarity and entropy of neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26,
1245–1252.
73. Ye, J. Single valued neutrosophic cross-entropy for multi criteria decision making problems. Appl. Math. Model.
2014, 38, 1170–1175. [CrossRef]
74. Ye, J. Improved cross entropy measures of single valued neutrosophic sets and interval neutrosophic sets
and their multi criteria decision making strategies. Cybern. Inf. Technol. 2015, 15, 13–26. [CrossRef]
75. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Interval Neutrosophic Sets and Logic: Theory and
Applications in Computing; Hexis: Phoenix, AZ, USA, 2005.
76. Pramanik, S.; Dalapati, S.; Alam, S.; Smarandache, F.; Roy, T.K. NS-cross entropy-based MAGDM under
single-valued neutrosophic set environment. Information 2018, 9, 37. [CrossRef]
77. Sahin, R. Cross-entropy measure on interval neutrosophic sets and its applications in multi criteria decision
making. Neural Comput. Appl. 2015. [CrossRef]
78. Tian, Z.P.; Zhang, H.Y.; Wang, J.; Wang, J.Q.; Chen, X.H. Multi-criteria decision-making method based on a
cross-entropy with interval neutrosophic sets. Int. J. Syst. Sci. 2015. [CrossRef]
79. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Springer: New York, NY,
USA, 1981.
80. Dalapati, S.; Pramanik, S.; Alam, S.; Smarandache, F.; Roy, T.K. IN-cross entropy based MAGDM strategy
under interval neutrosophic set environment. Neutrosophic Sets Syst. 2017, 18, 43–57. [CrossRef]
81. Deli, I.; Ali, M.; Smarandache, F. Bipolar neutrosophic sets and their application based on multi-criteria
decision making problems. In Proceedings of the 2015 International Conference on Advanced Mechatronic
Systems (ICAMechS), Beijing, China, 22–24 August 2015; pp. 249–254.
82. Zhang, W.R. Bipolar fuzzy sets. In Proceedings of the IEEE World Congress on Computational Science
(FuzzIEEE), Anchorage, AK, USA, 4–9 May 1998; pp. 835–840. [CrossRef]
83. Zhang, W.R. Bipolar fuzzy sets and relations: A computational framework for cognitive modeling and
multiagent decision analysis. In Proceedings of the IEEE Industrial Fuzzy Control and Intelligent Systems
Conference, and the NASA Joint Technology Workshop on Neural Networks and Fuzzy Logic, Fuzzy
Information Processing Society Biannual Conference, San Antonio, TX, USA, 18–21 December 1994;
pp. 305–309. [CrossRef]
84. Deli, I.; Subas, Y.A. Multiple criteria decision making method on single valued bipolar neutrosophic set based
on correlation coefficient similarity measure. In Proceedings of the International Conference on Mathematics
and Mathematics Education (ICMME-2016), Elazg, Turkey, 12–14 May 2016.
85. Şahin, M.; Deli, I.; Uluçay, V. Jaccard vector similarity measure of bipolar neutrosophic set based on
multi-criteria decision making. In Proceedings of the International Conference on Natural Science and
Engineering (ICNASE’16), Killis, Turkey, 19–20 March 2016.
86. Uluçay, V.; Deli, I.; Şahin, M. Similarity measures of bipolar neutrosophic sets and their application to
multiple criteria decision making. Neural Comput. Appl. 2016. [CrossRef]
87. Dey, P.P.; Pramanik, S.; Giri, B.C. TOPSIS for solving multi-attribute decision making problems under
bi-polar neutrosophic environment. In New Trends in Neutrosophic Theory and Applications; Smarandache, F.,
Pramanik, S., Eds.; Pons Asbl: Brussells, Belgium, 2016; pp. 65–77, ISBN 978-1-59973-498-9.
88. Pramanik, S.; Dey, P.P.; Giri, B.C.; Smarandache, F. Bipolar neutrosophic projection based models for solving
multi-attribute decision making problems. Neutrosophic Sets Syst. 2017, 15, 74–83. [CrossRef]
89. Wang, L.; Zhang, H.; Wang, J. Frank Choquet Bonferroni operators of bipolar neutrosophic sets and their
applications to multi-criteria decision-making problems. Int. J. Fuzzy Syst. 2017. [CrossRef]
90. Pramanik, S.; Dalapati, S.; Alam, S.; Smarandache, F.; Roy, T.K. TODIM Method for Group Decision
Making under Bipolar Neutrosophic Set Environment. In New Trends in Neutrosophic Theory and Applications;
Smarandache, F., Pramanik, S., Eds.; Pons Editions: Brussels, Belgium, 2016; Volume II, in press.


91. Mahmood, T.; Ye, J.; Khan, Q. Bipolar Interval Neutrosophic Set and Its Application in Multicriteria Decision
Making. Available online: https://archive.org/details/BipolarIntervalNeutrosophicSet (accessed on
9 October 2017).
92. Deli, I.; Şubaş, Y.; Smarandache, F.; Ali, M. Interval Valued Bipolar Neutrosophic Sets and Their Application in
Pattern Recognition. Available online: https://www.researchgate.net/publication/289587637 (accessed on
9 October 2017).
93. Ye, J. Similarity measures between interval neutrosophic sets and their applications in multicriteria
decision-making. J. Intell. Fuzzy Syst. 2014, 26, 165–172. [CrossRef]
94. Garg, H. Non-linear programming method for multi-criteria decision making problems under interval
neutrosophic set environment. Appl. Intell. 2017, 1–15. [CrossRef]
95. Pramanik, S.; Mukhopadhyaya, D. Grey relational analysis-based intuitionistic fuzzy multi-criteria group
decision-making approach for teacher selection in higher education. Int. J. Comput. Appl. 2011, 34, 21–29.
[CrossRef]
96. Mondal, K.; Pramanik, S. Intuitionistic fuzzy multi criteria group decision making approach to quality-brick
selection problem. J. Appl. Quant. Methods 2014, 9, 35–50.
97. Dey, P.P.; Pramanik, S.; Giri, B.C. Multi-criteria group decision making in intuitionistic fuzzy environment
based on grey relational analysis for weaver selection in Khadi institution. J. Appl. Quant. Methods 2015, 10,
1–14.
98. Dey, P.P.; Pramanik, S.; Giri, B.C. An extended grey relational analysis based interval neutrosophic
multi-attribute decision making for weaver selection. J. New Theory 2015, 9, 82–93.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Multi-Attribute Decision-Making Method Based on
Neutrosophic Soft Rough Information
Muhammad Akram 1, *, Sundas Shahzadi 1 and Florentin Smarandache 2
1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected]
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]

Received: 17 February 2018; Accepted: 19 March 2018; Published: 20 March 2018

Abstract: Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical
models for handling uncertainties, but they are mutually related. In this research paper, we introduce
the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as
hybrid models for soft computing. We describe a mathematical approach to handle decision-making
problems in view of NSRSs. We also present an efficient algorithm of our proposed hybrid model to
solve decision-making problems.

Keywords: soft rough neutrosophic sets; neutrosophic soft rough sets; decision-making; algorithm

MSC: 03E72; 68R10; 68R05

1. Introduction
Smarandache [1] initiated the concept of neutrosophic set (NS). Smarandache’s NS is characterized
by three parts: truth, indeterminacy, and falsity. Truth, indeterminacy and falsity membership values
behave independently and deal with the problems of having uncertain, indeterminant and imprecise
data. Wang et al. [2] gave a new concept of single valued neutrosophic set (SVNS) and defined the set
of theoretic operators in an instance of NS called SVNS. Ye [3–5] studied the correlation coefficient and
improved correlation coefficient of NSs, and also determined that, in NSs, the cosine similarity measure
is a special case of the correlation coefficient. Peng et al. [6] discussed the operations of simplified
neutrosophic numbers and introduced an outranking idea of simplified neutrosophic numbers.
Molodtsov [7] introduced the notion of soft set as a novel mathematical approach for handling
uncertainties. Molodtsov’s soft sets give us new technique for dealing with uncertainty from the
viewpoint of parameters. Maji et al. [8–10] introduced neutrosophic soft sets (NSSs), intuitionistic
fuzzy soft sets (IFSSs) and fuzzy soft sets (FSSs). Babitha and Sunil gave the idea of soft set relations [11].
In [12], Sahin and Kucuk presented NSS in the form of neutrosophic relation.
Rough set theory was initiated by Pawlak [13] in 1982. Rough set theory is used to study the
intelligence systems containing incomplete, uncertain or inexact information. The lower and upper
approximation operators of RSs are used for managing hidden information in a system. Therefore,
many hybrid models have been built such as soft rough sets (SRSs), rough fuzzy sets (RFSs),
fuzzy rough sets (FRSs), soft fuzzy rough sets (SFRSs), soft rough fuzzy sets (SRFSs), intuitionistic
fuzzy soft rough sets (IFSRS), neutrosophic rough sets (NRSs), and rough neutrosophic sets (RNSs) for
handling uncertainty and incomplete information effectively. Soft set theory and RS theory are two
different mathematical tools to deal with uncertainty. Evidently, there is no direct relationship between
these two mathematical tools, but efforts have been made to define some kind of relation [14,15].
Feng et al. [15] took a significant step to introduce parametrization tools in RSs. They introduced SRSs,

Axioms 2018, 7, 19; doi:10.3390/axioms7010019 72 www.mdpi.com/journal/axioms



in which parameterized subsets of universal sets are elementary building blocks for approximation
operators of a subset. Shabir et al. [16] introduced another approach to study roughness through SSs,
and this approach is known as modified SRSs (MSR-sets). In MSR-sets, some results proved to be
valid that failed to hold in SRSs. Feng et al. [17] introduced a modification of Pawlak approximation
space known as soft approximation space (SAS) in which SAS SRSs were proposed. Moreover, they
introduced soft rough fuzzy approximation operators in SAS and initiated the idea of SRFSs, which is an extension of RFSs introduced by Dubois and Prade [18]. Meng et al. [19] provide further discussion
of the combination of SSs, RSs and FSs. In various decision-making problems, RSs have been used.
The existing results of RSs and other extended RSs such as RFSs, generalized RFSs, SFRSs and IFRSs
based decision-making models have their advantages and limitations [20,21]. In a different way,
RS approximations have been constructed into the IF environment and are known as IFRSs, RIFSs and
generalized IFRSs [22–24]. Zhang et al. [25,26] presented the notions of SRSs, SRIFSs, and IFSRSs,
its application in decision-making, and also introduced generalized IFSRSs. Broumi et al. [27,28]
developed a hybrid structure by combining RSs and NSs, called RNSs. They also presented interval
valued neutrosophic soft rough sets by combining interval valued neutrosophic soft sets and RSs.
Yang et al. [29] proposed single valued neutrosophic rough sets (SVNRSs) by combining SVNSs and
RSs, and established an algorithm for decision-making problems based on SVNRSs in two universes.
For some papers related to NSs and multi-criteria decision-making (MCDM), the readers are referred
to [30–38]. The notion of SRNSs is an extension of the SRSs, SRIFSs, and IFSRSs introduced by Zhang et al. Motivated by the idea of single valued neutrosophic rough sets (SVNRSs), we extend the lower and upper approximations of single valued neutrosophic rough sets to the case of a neutrosophic soft rough set. The concept of a neutrosophic soft rough set is introduced by coupling both the
neutrosophic soft sets and rough sets. In this research paper, we introduce the notions of SRNSs
and NSRSs as hybrid models for soft computing. Approximation operators of SRNSs and NSRSs are
described and their relevant properties are investigated in detail. We describe a mathematical approach
to handle decision-making problems in view of NSRSs. We also present an efficient algorithm of our
proposed hybrid model to solve decision-making problems.

2. Construction of Soft Rough Neutrosophic Sets


In this section, we introduce the notions of SRNSs by combining soft sets with RNSs and soft
rough neutrosophic relations (SRNRs). Soft rough neutrosophic sets consist of two basic components,
namely neutrosophic sets and soft relations, which are the mathematical basis of SRNSs. The basic idea
of soft rough neutrosophic sets is based on the approximation of sets by a couple of sets known as the
lower soft rough neutrosophic approximation and the upper soft rough neutrosophic approximation
of a set. Here, the lower and upper approximation operators are based on an arbitrary soft relation.
The concept of soft rough neutrosophic sets extends crisp sets and rough sets for the study of intelligent systems characterized by inexact, uncertain or insufficient information, and it is a useful tool for dealing with uncertain or imprecise information. The concept of neutrosophic soft sets provides a powerful logic for handling indeterminate and inconsistent situations, and the theory of rough neutrosophic sets is likewise a powerful mathematical tool for handling incompleteness. We introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. The rating of each alternative is expressed by the lower and upper soft rough neutrosophic approximation operators, i.e., by a pair of neutrosophic sets characterized by a truth-membership degree, an indeterminacy-membership degree, and a falsity-membership degree from the viewpoint of parameters.

Definition 1. Let Y be an initial universal set and M a universal set of parameters. For an arbitrary soft relation
P over Y × M, let Ps : Y → N ( M) be a set-valued function defined as Ps (u) = {k ∈ M | (u, k ) ∈ P}, u ∈ Y.
Let (Y, M, P) be an SAS. For any NS C = {(k, TC (k ), IC (k ), FC (k )) | k ∈ M} ∈ N ( M), where N ( M )
is a neutrosophic power set of parameter set M, the lower soft rough neutrosophic approximation (LSRNA) and


the upper soft rough neutrosophic approximation (USRNA) operators of C w.r.t (Y, M, P) denoted by P(C ) and
P(C ), are, respectively, defined as follows:

P(C ) = {(u, TP(C) (u), IP(C) (u), FP(C) (u)) | u ∈ Y },

P(C ) = {(u, TP(C) (u), IP(C) (u), FP(C) (u)) | u ∈ Y },

where

T_P(C)(u) = ⋁_{k ∈ P_s(u)} T_C(k), I_P(C)(u) = ⋀_{k ∈ P_s(u)} I_C(k), F_P(C)(u) = ⋀_{k ∈ P_s(u)} F_C(k)  (for the USRNA operator),

T_P(C)(u) = ⋀_{k ∈ P_s(u)} T_C(k), I_P(C)(u) = ⋁_{k ∈ P_s(u)} I_C(k), F_P(C)(u) = ⋁_{k ∈ P_s(u)} F_C(k)  (for the LSRNA operator).

It is observed that P(C ) and P(C ) are two NSs on Y, P(C ), P(C ) : N ( M ) → P (Y ) are referred to as the
LSRNA and the USRNA operators, respectively. The pair ( P(C ), P(C )) is called SRNS of C w.r.t (Y, M, P).

Remark 1. Let (Y, M, P) be an SAS. If C ∈ IF(M) or C ∈ P(M), where IF(M) and P(M) are the intuitionistic fuzzy power set and the crisp power set of M, respectively, then the above SRNA operators P(C) and P(C) degenerate to SRIFA and SRA operators, respectively. Hence, SRNA operators are an extension of SRIFA and SRA operators.

Example 1. Suppose that Y = {w1 , w2 , w3 , w4 , w5 } is the set of five careers under observation, and Mr. X
wants to select the best suitable career. Let M = {k1, k2, k3, k4} be a set of decision parameters. The parameters
k1 , k2 , k3 and k4 stand for “aptitude”, “work value”, “skill” and “recent advancement”, respectively. Mr. X
describes the “most suitable career” by defining a soft relation P from Y to M, which is a crisp soft set as shown
in Table 1.

Table 1. Crisp soft relation P.

P w1 w2 w3 w4 w5
k1 1 1 0 1 0
k2 0 1 1 0 1
k3 0 1 0 0 0
k4 1 1 1 0 1

Ps : Y → N ( M) is a set valued function, and we have Ps (w1 ) = {k1 , k4 }, Ps (w2 ) =


{k1, k2, k3, k4}, Ps(w3) = {k2, k4}, Ps(w4) = {k1} and Ps(w5) = {k2, k4}. Mr. X gives the most favorable
parameter object C, which is an NS defined as follows:

C = {(k1 , 0.2, 0.5, 0.6), (k2 , 0.4, 0.3, 0.2), (k3 , 0.2, 0.4, 0.5), (k4 , 0.6, 0.2, 0.1)}.

From the Definition 1, we have


 
T_P(C)(w1) = ⋁_{k ∈ P_s(w1)} T_C(k) = ⋁{0.2, 0.6} = 0.6,
I_P(C)(w1) = ⋀_{k ∈ P_s(w1)} I_C(k) = ⋀{0.5, 0.2} = 0.2,
F_P(C)(w1) = ⋀_{k ∈ P_s(w1)} F_C(k) = ⋀{0.6, 0.1} = 0.1,

TP(C) (w2 ) = 0.6, IP(C) (w2 ) = 0.2, FP(C) (w2 ) = 0.1,

TP(C) (w3 ) = 0.6, IP(C) (w3 ) = 0.2, FP(C) (w3 ) = 0.1,


TP(C) (w4 ) = 0.2, IP(C) (w4 ) = 0.5, FP(C) (w4 ) = 0.6,

TP(C) (w5 ) = 0.6, IP(C) (w5 ) = 0.2, FP(C) (w5 ) = 0.1.

Similarly,
 
T_P(C)(w1) = ⋀_{k ∈ P_s(w1)} T_C(k) = ⋀{0.2, 0.6} = 0.2,
I_P(C)(w1) = ⋁_{k ∈ P_s(w1)} I_C(k) = ⋁{0.5, 0.2} = 0.5,
F_P(C)(w1) = ⋁_{k ∈ P_s(w1)} F_C(k) = ⋁{0.6, 0.1} = 0.6,

TP(C) (w2 ) = 0.2, IP(C) (w2 ) = 0.5, FP(C) (w2 ) = 0.6,

TP(C) (w3 ) = 0.4, IP(C) (w3 ) = 0.3, FP(C) (w3 ) = 0.2,

TP(C) (w4 ) = 0.2, IP(C) (w4 ) = 0.5, FP(C) (w4 ) = 0.6,

TP(C) (w5 ) = 0.4, IP(C) (w5 ) = 0.3, FP(C) (w5 ) = 0.2.

Thus, we obtain

P(C ) = {(w1 , 0.6, 0.2, 0.1), (w2 , 0.6, 0.2, 0.1), (w3 , 0.6, 0.2, 0.1), (w4 , 0.2, 0.5, 0.6), (w5 , 0.6, 0.2, 0.1)},
P(C ) = {(w1 , 0.2, 0.5, 0.6), (w2 , 0.2, 0.5, 0.6), (w3 , 0.4, 0.3, 0.2), (w4 , 0.2, 0.5, 0.6), (w5 , 0.4, 0.3, 0.2)}.

Hence, ( P(C ), P(C )) is an SRNS of C.
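The approximations in Example 1 reduce to simple maxima and minima over P_s(u), so they are easy to check by machine. The following sketch is written in MATLAB (the environment used for the calculations reported in Section 4); the variable names P, TC, IC, FC, Pupper and Plower are ours and are not part of the formal development.

% Crisp soft relation of Table 1: rows k1..k4, columns w1..w5.
P  = [1 1 0 1 0; 0 1 1 0 1; 0 1 0 0 0; 1 1 1 0 1];
% Most favorable parameter object C on M = {k1,...,k4}.
TC = [0.2 0.4 0.2 0.6];  IC = [0.5 0.3 0.4 0.2];  FC = [0.6 0.2 0.5 0.1];
n = size(P, 2);  Pupper = zeros(n, 3);  Plower = zeros(n, 3);
for i = 1:n
    idx = find(P(:, i));                                          % P_s(w_i): parameters related to w_i
    Pupper(i, :) = [max(TC(idx)), min(IC(idx)), min(FC(idx))];    % USRNA at w_i
    Plower(i, :) = [min(TC(idx)), max(IC(idx)), max(FC(idx))];    % LSRNA at w_i
end
% Pupper reproduces (0.6, 0.2, 0.1), ..., and Plower reproduces (0.2, 0.5, 0.6), ..., as above.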

Theorem 1. Let (Y, M, P) be an SAS. Then, the LSRNA and the USRNA operators P(C ) and P(C ) satisfy
the following properties for all C, D ∈ N ( M ):

(i) P(C ) =∼ P(∼ C ),


(ii) P(C ∩ D ) = P(C ) ∩ P( D ),
(iii) C ⊆ D ⇒ P(C ) ⊆ P( D ),
(iv) P(C ∪ D ) ⊇ P(C ) ∪ P( D ),
(v) P(C ) =∼ P(∼ C ),
(vi) P(C ∪ D ) = P(C ) ∪ P( D ),
(vii) C ⊆ D ⇒ P(C ) ⊆ P( D ),
(viii) P(C ∩ D ) ⊆ P(C ) ∩ P( D ),

where ∼ C is the complement of C.

Proof. (i) By definition of SRNS, we have

∼ C = {(k, FC (k), 1 − IC (k), TC (k))},


P(∼ C ) = {(u, TP(∼C) (u), IP(∼C) (u), FP(∼C) (u)) | u ∈ Y },
∼ P(∼ C ) = {(u, FP(∼C) (u), 1 − IP(∼C) (u), TP(∼C) (u)) | u ∈ Y },

where
  
F_P(∼C)(u) = ⋀_{k ∈ P_s(u)} T_C(k), I_P(∼C)(u) = ⋀_{k ∈ P_s(u)} (1 − I_C(k)), T_P(∼C)(u) = ⋁_{k ∈ P_s(u)} F_C(k).

Hence, ∼ P(∼ C ) = P(C ).


(ii)

P(C ∩ D) = {(u, T_P(C∩D)(u), I_P(C∩D)(u), F_P(C∩D)(u)) | u ∈ Y}
= {(u, ⋀_{k ∈ P_s(u)} T_{C∩D}(k), ⋁_{k ∈ P_s(u)} I_{C∩D}(k), ⋁_{k ∈ P_s(u)} F_{C∩D}(k)) | u ∈ Y}
= {(u, ⋀_{k ∈ P_s(u)} (T_C(k) ∧ T_D(k)), ⋁_{k ∈ P_s(u)} (I_C(k) ∨ I_D(k)), ⋁_{k ∈ P_s(u)} (F_C(k) ∨ F_D(k))) | u ∈ Y}
= {(u, T_P(C)(u) ∧ T_P(D)(u), I_P(C)(u) ∨ I_P(D)(u), F_P(C)(u) ∨ F_P(D)(u)) | u ∈ Y}
= P(C) ∩ P(D).

(iii) It can be easily proved by Definition 1.


(iv)

T_P(C∪D)(u) = ⋀_{k ∈ P_s(u)} T_{C∪D}(k)
= ⋀_{k ∈ P_s(u)} (T_C(k) ∨ T_D(k))
≥ (⋀_{k ∈ P_s(u)} T_C(k)) ∨ (⋀_{k ∈ P_s(u)} T_D(k))
= T_P(C)(u) ∨ T_P(D)(u),
so T_P(C∪D)(u) ≥ T_P(C)(u) ∨ T_P(D)(u).

Similarly, we can prove that

IP (C ∪ D ) ( u ) ≤ IP (C ) ( u ) ∪ IP ( D ) ( u ),
FP(C∪ D) (u) ≤ FP(C) (u) ∪ FP( D) (u).

Thus, P(C ∪ D ) ⊇ P(C ) ∪ P( D ).


The properties (v)–(viii) of the USRNA P(C ) can be easily proved similarly.

Example 2. Considering Example 1, we have

∼ C = {(k1 , 0.6, 0.5, 0.2), (k2 , 0.2, 0.7, 0.4), (k3 , 0.5, 0.6, 0.2), (k4 , 0.1, 0.8, 0.6)},
P(∼ C ) = {(w1 , 0.6, 0.5, 0.2), (w2 , 0.6, 0.5, 0.2), (w3 , 0.2, 0.7, 0.4), (w4 , 0.6, 0.5, 0.2),
(w5 , 0.2, 0.7, 0.4)},
∼ P(∼ C ) = {(w1 , 0.2, 0.5, 0.6), (w2 , 0.2, 0.5, 0.6), (w3 , 0.4, 0.3, 0.2), (w4 , 0.2, 0.5, 0.6),
(w5 , 0.4, 0.3, 0.2)},
= P ( C ).
Let D = {(k1 , 0.4, 0.2, 0.6), (k2 , 0.5, 0.3, 0.2), (k3 , 0.5, 0.5, 0.1), (k4 , 0.6, 0.4, 0.7)},
P( D ) = {(w1 , 0.4, 0.4, 0.7), (w2 , 0.4, 0.5, 0.6), (w3 , 0.5, 0.4, 0.7), (w4 , 0.4, 0.2, 0.6),
(w5 , 0.5, 0.4, 0.7)},
C∩D = {(k1 , 0.2, 0.5, 0.6), (k2 , 0.4, 0.3, 0.2), (k3 , 0.2, 0.5, 0.5), (k4 , 0.6, 0.4, 0.7)},


P(C ∩ D ) = {(w1 , 0.2, 0.5, 0.7), (w2 , 0.2, 0.5, 0.6), (w3 , 0.4, 0.4, 0.7), (w4 , 0.2, 0.5, 0.6),
(w5 , 0.4, 0.4, 0.7)},
P(C ) ∩ P( D ) = {(w1 , 0.2, 0.5, 0.7), (w2 , 0.2, 0.5, 0.6), (w3 , 0.4, 0.4, 0.7), (w4 , 0.2, 0.5, 0.6),
(w5 , 0.4, 0.4, 0.7)},
P(C ∩ D ) = P ( C ) ∩ P ( D ),
C∪D = {(k1 , 0.4, 0.2, 0.6), (k2 , 0.5, 0.3, 0.2), (k3 , 0.5, 0.4, 0.1), (k4 , 0.6, 0.2, 0.1)},
P(C ∪ D ) = {(w1 , 0.4, 0.2, 0.6), (w2 , 0.4, 0.4, 0.6), (w3 , 0.5, 0.3, 0.2), (w4 , 0.4, 0.2, 0.6),
(w5 , 0.5, 0.3, 0.2)},
P(C ) ∪ P( D ) = {(w1 , 0.4, 0.4, 0.6), (w2 , 0.4, 0.5, 0.6), (w3 , 0.5, 0.3, 0.2), (w4 , 0.4, 0.2, 0.6),
(w5 , 0.5, 0.3, 0.2)}.

Clearly, P(C ∪ D ) ⊇ P(C ) ∪ P( D ). Hence, properties of the LSRNA operator hold, and we can easily
verify the properties of the USRNA operator.
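The identity P(C ∩ D) = P(C) ∩ P(D) in Example 2 can also be verified mechanically. The MATLAB sketch below is self-contained and repeats the data of Examples 1 and 2; the variable names are again ours and are only illustrative.

% Soft relation of Table 1 and the NSs C and D of Examples 1 and 2.
P  = [1 1 0 1 0; 0 1 1 0 1; 0 1 0 0 0; 1 1 1 0 1];
TC = [0.2 0.4 0.2 0.6];  IC = [0.5 0.3 0.4 0.2];  FC = [0.6 0.2 0.5 0.1];
TD = [0.4 0.5 0.5 0.6];  ID = [0.2 0.3 0.5 0.4];  FD = [0.6 0.2 0.1 0.7];
n = size(P, 2);  lowCD = zeros(n, 3);  lowC = zeros(n, 3);  lowD = zeros(n, 3);
for i = 1:n
    idx = find(P(:, i));
    lowCD(i,:) = [min(min(TC(idx),TD(idx))), max(max(IC(idx),ID(idx))), max(max(FC(idx),FD(idx)))];
    lowC(i,:)  = [min(TC(idx)), max(IC(idx)), max(FC(idx))];
    lowD(i,:)  = [min(TD(idx)), max(ID(idx)), max(FD(idx))];
end
inter = [min(lowC(:,1),lowD(:,1)), max(lowC(:,2),lowD(:,2)), max(lowC(:,3),lowD(:,3))];
isequal(lowCD, inter)        % returns 1, confirming property (ii) of Theorem 1 on this data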

The conventional soft set is a mapping from a parameter to a subset of the universe; let (P, M) be a crisp soft set. In [11], Babitha and Sunil introduced the concept of a soft set relation. Now, we present the constructive definition of an SRNR by using a soft relation R from M × M = Ḿ to P(Y × Y = Ý), where Y is a universal set and M is a set of parameters.

Definition 2. An SRNR (R(D), R(D)) on Y is an SRNS, where R : Ḿ → P(Ý) is a soft relation on Y defined by

R(k i k j ) = {ui u j | ∃ui ∈ P(k i ), u j ∈ P(k j )}, ui u j ∈ Ý.

Let Rs : Ý → P ( Ḿ ) be a set-valued function by

Rs (ui u j ) = {k i k j ∈ Ḿ | (ui u j , k i k j ) ∈ R}, ui u j ∈ Ý.

For any D ∈ N(Ḿ), the USRNA and the LSRNA operators of D w.r.t. (Ý, Ḿ, R) are defined as follows:

R(D) = {(u_i u_j, T_R(D)(u_i u_j), I_R(D)(u_i u_j), F_R(D)(u_i u_j)) | u_i u_j ∈ Ý},

R(D) = {(u_i u_j, T_R(D)(u_i u_j), I_R(D)(u_i u_j), F_R(D)(u_i u_j)) | u_i u_j ∈ Ý},

where

T_R(D)(u_i u_j) = ⋁_{k_i k_j ∈ R_s(u_i u_j)} T_D(k_i k_j), I_R(D)(u_i u_j) = ⋀_{k_i k_j ∈ R_s(u_i u_j)} I_D(k_i k_j), F_R(D)(u_i u_j) = ⋀_{k_i k_j ∈ R_s(u_i u_j)} F_D(k_i k_j)  (for the USRNA operator),

T_R(D)(u_i u_j) = ⋀_{k_i k_j ∈ R_s(u_i u_j)} T_D(k_i k_j), I_R(D)(u_i u_j) = ⋁_{k_i k_j ∈ R_s(u_i u_j)} I_D(k_i k_j), F_R(D)(u_i u_j) = ⋁_{k_i k_j ∈ R_s(u_i u_j)} F_D(k_i k_j)  (for the LSRNA operator).

The pair ( R( D ), R( D )) is called SRNR and R, R : N ( Ḿ) → P (Ý ) are called the LSRNA and the
USRNA operators, respectively.


Remark 2. For an NS D on Ḿ and an NS C on M,

T_D(k_i k_j) ≤ min_{k_i ∈ M} {T_C(k_i)},  I_D(k_i k_j) ≤ min_{k_i ∈ M} {I_C(k_i)},  F_D(k_i k_j) ≤ min_{k_i ∈ M} {F_C(k_i)}.

According to the definition of SRNR, we get

TR( D) (ui u j ) ≤ min{ TR(C) (ui ), TR(C) (u j )},


IR ( D ) ( u i u j ) ≤ max{ IR(C) (ui ), IR(C) (u j )},
FR( D) (ui u j ) ≤ max{ FR(C) (ui ), FR(C) (u j )}.

Similarly, for the LSRNA operator R( D ),

TR( D) (ui u j ) ≤ min{ TR(C) (ui ), TR(C) (u j )},


IR ( D ) ( u i u j ) ≤ max{ IR(C) (ui ), IR(C) (u j )},
FR( D) (ui u j ) ≤ max{ FR(C) (ui ), FR(C) (u j )}.

Example 3. Let Y = {u1 , u2 , u3 } be a universal set and M = {k1 , k2 , k3 } be a set of parameters. A soft set
( P, M) on Y can be defined in tabular form (see Table 2) as follows:

Table 2. Soft set ( P, M).

P u1 u2 u3
k1 1 1 0
k2 0 0 1
k3 1 1 1

Let E = {u1 u2 , u2 u3 , u2 u2 , u3 u2 } ⊆ Ý and L = {k1 k3 , k2 k1 , k3 k2 } ⊆ Ḿ. Then, a soft relation R on E


(from L to E) can be defined in tabular form (see Table 3) as follows:

Table 3. Soft relation R.

R u1 u2 u2 u3 u2 u2 u3 u2
k1 k3 1 1 1 0
k2 k1 0 0 0 1
k3 k2 0 1 0 0

Now, we can define set-valued function Rs such that

R s ( u1 u2 ) = { k 1 k 3 }, R s ( u2 u3 ) = { k 1 k 3 , k 3 k 2 }, R s ( u2 u2 ) = { k 1 k 3 }, R s ( u3 u2 ) = { k 2 k 1 }.

Let C = {(k1 , 0.2, 0.4, 0.6), (k2 , 0.4, 0.5, 0.2), (k3 , 0.1, 0.2, 0.4)} be an NS on M, then
R(C ) = {(u1 , 0.2, 0.2, 0.4), (u2 , 0.2, 0.4, 0.4), (u3 , 0.4, 0.2, 0.2)},
R(C ) = {(u1 , 0.1, 0.4, 0.6), (u2 , 0.1, 0.4, 0.6), (u3 , 0.1, 0.5, 0.4)},
Let D = {(k1 k3 , 0.1, 0.2, 0.2), (k2 k1 , 0.1, 0.1, 0.2), (k3 k2 , 0.1, 0.2, 0.1)} be an NS on L, then
R( D ) = {(u1 u2 , 0.1, 0.2, 0.2), (u2 u3 , 0.1, 0.2, 0.1), (u2 u2 , 0.1, 0.2, 0.2), (u3 u2 , 0.1, 0.1, 0.2)},
R( D ) = {(u1 u2 , 0.1, 0.2, 0.2), (u2 u3 , 0.1, 0.2, 0.1), (u2 u2 , 0.1, 0.2, 0.2), (u3 u2 , 0.1, 0.1, 0.2)}.

Hence, R( D ) = ( R( D ), R( D )) is SRNR.
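Definition 2 builds the pair relation R of Table 3 directly from the soft set (P, M) of Table 2: a pair u_i u_j is related to k_i k_j exactly when u_i ∈ P(k_i) and u_j ∈ P(k_j). A small MATLAB sketch of this construction, under the same data, is given below; the index matrices E and L simply encode the chosen pairs and are not part of the formal notation.

% Soft set of Table 2: rows k1..k3, columns u1..u3.
Psoft = [1 1 0; 0 0 1; 1 1 1];
E = [1 2; 2 3; 2 2; 3 2];           % pairs u1u2, u2u3, u2u2, u3u2
L = [1 3; 2 1; 3 2];                % parameter pairs k1k3, k2k1, k3k2
R = zeros(size(L, 1), size(E, 1));
for a = 1:size(L, 1)
    for b = 1:size(E, 1)
        R(a, b) = Psoft(L(a,1), E(b,1)) && Psoft(L(a,2), E(b,2));
    end
end
% R equals Table 3; the approximations of an NS D on L then follow exactly as in the
% sketch after Example 1, with the columns of R playing the role of R_s(u_i u_j).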


3. Construction of Neutrosophic Soft Rough Sets


In this section, we will introduce the notions of NSRSs and neutrosophic soft rough relations (NSRRs).

Definition 3. Let Y be an initial universal set and M a universal set of parameters. For an arbitrary
neutrosophic soft relation P̃ from Y to M, (Y, M, P̃) is called neutrosophic soft approximation space (NSAS).
For any NS C ∈ N ( M), we define the upper neutrosophic soft approximation (UNSA) and the lower
neutrosophic soft approximation (LNSA) operators of C with respect to (Y, M, P̃) denoted by P̃(C ) and P̃(C ),
respectively as follows:

P̃(C ) = {(u, TP̃(C) (u), IP̃(C) (u), FP̃(C) (u)) | u ∈ Y },


P̃(C ) = {(u, TP̃(C) (u), IP̃(C) (u), FP̃(C) (u)) | u ∈ Y },

where

T_P̃(C)(u) = ⋁_{k ∈ M} (T_P̃(C)(u, k) ∧ T_C(k)), I_P̃(C)(u) = ⋀_{k ∈ M} (I_P̃(C)(u, k) ∨ I_C(k)), F_P̃(C)(u) = ⋀_{k ∈ M} (F_P̃(C)(u, k) ∨ F_C(k))  (for the UNSA operator),

T_P̃(C)(u) = ⋀_{k ∈ M} (F_P̃(C)(u, k) ∨ T_C(k)), I_P̃(C)(u) = ⋁_{k ∈ M} ((1 − I_P̃(C)(u, k)) ∧ I_C(k)), F_P̃(C)(u) = ⋁_{k ∈ M} (T_P̃(C)(u, k) ∧ F_C(k))  (for the LNSA operator).

The pair ( P̃(C ), P̃(C )) is called NSRS of C w.r.t (Y, M, P̃), and P̃ and P̃ are referred to as the LNSRA
and the UNSRA operators, respectively.

Remark 3. A neutrosophic soft relation over Y × M is actually a neutrosophic soft set on Y. The NSRA
operators are defined over two distinct universes Y and M. As we know, universal set Y and parameter set M are
two different universes of discourse that are closely related; nevertheless, they cannot be considered identical universes, and therefore the reflexive, symmetric and transitive properties of neutrosophic soft relations from Y to M do not apply.
Let P̃ be a neutrosophic soft relation from Y to M, if, for each u ∈ Y, there exists k ∈ M such that
TP̃ (u, k) = 1, IP̃ (u, k ) = 0, FP̃ (u, k ) = 0. Then, P̃ is referred to as a serial neutrosophic soft relation from Y to
parameter set M.

Example 4. Suppose that Y = {w1 , w2 , w3 , w4 } is the set of careers under consideration, and Mr. X wants
to select the most suitable career. M = {k1 , k2 , k3 } is a set of decision parameters. Mr. X describes the “most
suitable career” by defining a neutrosophic soft set ( P̃, M ) on Y that is a neutrosophic relation from Y to M as
shown in Table 4.

Table 4. Neutrosophic soft relation P̃.

P̃ w1 w2 w3 w4
k1 (0.3, 0.4, 0.5) (0.4, 0.2, 0.3) (0.1, 0.5, 0.4) (0.2, 0.3, 0.4)
k2 (0.1, 0.5, 0.4) (0.3, 0.4, 0.6) (0.4, 0.4, 0.3) (0.5, 0.3, 0.8)
k3 (0.3, 0.4, 0.4) (0.4, 0.6, 0.7) (0.3, 0.5, 0.4) (0.5, 0.4, 0.6)


Now, Mr. X gives the most favorable decision object C, which is an NS on M defined as follows:
C = {(k1 , 0.5, 0.2, 0.4), (k2 , 0.2, 0.3, 0.1), (k3 , 0.2, 0.4, 0.6)}. By Definition 3, we have
   
T_P̃(C)(w1) = ⋁_{k ∈ M} (T_P̃(C)(w1, k) ∧ T_C(k)) = ⋁{0.3, 0.1, 0.2} = 0.3,
I_P̃(C)(w1) = ⋀_{k ∈ M} (I_P̃(C)(w1, k) ∨ I_C(k)) = ⋀{0.4, 0.5, 0.4} = 0.4,
F_P̃(C)(w1) = ⋀_{k ∈ M} (F_P̃(C)(w1, k) ∨ F_C(k)) = ⋀{0.5, 0.4, 0.6} = 0.4,

TP̃(C) (w2 ) = 0.4, IP̃(C) (w2 ) = 0.2, FP̃(C) (w2 ) = 0.4,

TP̃(C) (w3 ) = 0.2, IP̃(C) (w3 ) = 0.4, FP̃(C) (w3 ) = 0.3,

TP̃(C) (w4 ) = 0.2, IP̃(C) (w4 ) = 0.3, FP̃(C) (w4 ) = 0.4.

Similarly,
   
T_P̃(C)(w1) = ⋀_{k ∈ M} (F_P̃(C)(w1, k) ∨ T_C(k)) = ⋀{0.5, 0.4, 0.4} = 0.4,
I_P̃(C)(w1) = ⋁_{k ∈ M} ((1 − I_P̃(C)(w1, k)) ∧ I_C(k)) = ⋁{0.2, 0.3, 0.4} = 0.4,
F_P̃(C)(w1) = ⋁_{k ∈ M} (T_P̃(C)(w1, k) ∧ F_C(k)) = ⋁{0.3, 0.1, 0.3} = 0.3,

TP̃(C) (w2 ) = 0.5, IP̃(C) (w2 ) = 0.4, FP̃(C) (w2 ) = 0.4,

TP̃(C) (w3 ) = 0.4, IP̃(C) (w3 ) = 0.4, FP̃(C) (w3 ) = 0.3,

TP̃(C) (w4 ) = 0.5, IP̃(C) (w4 ) = 0.4, FP̃(C) (w4 ) = 0.5.

Thus, we obtain

P̃(C ) = {(w1 , 0.3, 0.4, 0.4), (w2 , 0.4, 0.2, 0.4), (w3 , 0.2, 0.4, 0.3), (w4 , 0.2, 0.3, 0.4)},
P̃(C ) = {(w1 , 0.4, 0.4, 0.3), (w2 , 0.5, 0.4, 0.4), (w3 , 0.4, 0.4, 0.3), (w4 , 0.5, 0.4, 0.5)}.

Hence, ( P̃(C ), P̃(C )) is an NSRS of C.
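Example 4 can likewise be checked with a few lines of MATLAB. The sketch below encodes Table 4 as three matrices of truth, indeterminacy and falsity values and applies the UNSA and LNSA formulas of Definition 3; the variable names TP, IP, FP, NSup and NSlow are ours and are only illustrative.

% Neutrosophic soft relation of Table 4: rows k1..k3, columns w1..w4.
TP = [0.3 0.4 0.1 0.2; 0.1 0.3 0.4 0.5; 0.3 0.4 0.3 0.5];
IP = [0.4 0.2 0.5 0.3; 0.5 0.4 0.4 0.3; 0.4 0.6 0.5 0.4];
FP = [0.5 0.3 0.4 0.4; 0.4 0.6 0.3 0.8; 0.4 0.7 0.4 0.6];
TC = [0.5; 0.2; 0.2];  IC = [0.2; 0.3; 0.4];  FC = [0.4; 0.1; 0.6];    % the NS C on M
n = size(TP, 2);  NSup = zeros(n, 3);  NSlow = zeros(n, 3);
for i = 1:n
    NSup(i,:)  = [max(min(TP(:,i), TC)), min(max(IP(:,i), IC)), min(max(FP(:,i), FC))];
    NSlow(i,:) = [min(max(FP(:,i), TC)), max(min(1 - IP(:,i), IC)), max(min(TP(:,i), FC))];
end
% NSup reproduces (0.3, 0.4, 0.4), ..., and NSlow reproduces (0.4, 0.4, 0.3), ..., as above.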

Theorem 2. Let (Y, M, P̃) be an NSAS. Then, the UNSRA and the LNSRA operators P̃(C ) and P̃(C ) satisfy
the following properties for all C, D ∈ N ( M ):

(i) P̃(C) = ∼P̃(∼C),


(ii) P̃(C ∩ D ) = P̃(C ) ∩ P̃( D ),
(iii) C ⊆ D ⇒ P̃(C ) ⊆ P̃( D ),
(iv) P̃(C ∪ D ) ⊇ P̃(C ) ∪ P̃( D ),
(v) P̃(C ) =∼ P̃(∼ C ),
(vi) P̃(C ∪ D ) = P̃(C ) ∪ P̃( D ),
(vii) C ⊆ D ⇒ P̃(C ) ⊆ P̃( D ),
(viii) P̃(C ∩ D ) ⊆ P̃(C ) ∩ P̃( D ).


Proof. (i)

∼ C = {(k, FC (k), 1 − IC (k), TC (k)) | k ∈ M}.


By definition of NSRS, we have
 
P̃(∼ C )
= { u, TP̃(∼C) (u), IP̃(∼C) (u), FP̃(∼C) (u) | u ∈ Y },
 
∼ P̃(∼ C ) = { u, FP̃(∼C) (u), 1 − IP̃(∼C) (u), TP̃(∼C) (u) | u ∈ Y },
  
FP̃(∼C) (u) = FP̃ (u, k) ∨ TC (k )
k∈ M
= TP̃(C) (u),
  
1 − IP̃(∼C) (u) = 1 − [ IP̃ (u, k) ∨ I∼C (k)]
k∈ M
  
= (1 − IP̃ (u, k)) ∧ (1 − I∼C (k))
k∈ M
   
= (1 − IP̃ (u, k)) ∧ 1 − (1 − IC (k))
k∈ M
  
= (1 − IP̃ (u, k)) ∧ IC (k)
k∈ M
= IP̃(C) (u),
  
TP̃(∼C) (u) = TP̃ (u, k) ∧ T∼C (k )
k∈ M
  
= TP̃ (u, k) ∧ FC (k )
k∈ M
= FP̃(C) (u).
Thus, P̃(C ) = ∼ P̃(∼ C ).

(ii)
 
P̃(C ∩ D ) = { u, TP̃(C∩ D) (u), IP̃(C∩ D) (u), FP̃(C∩ D) (u) },
 
P̃(C ) ∩ P̃( D ) = { u, TP̃(C) (u) ∧ TP̃( D) (u), IP̃(C) (u) ∨ IP̃( D) (u), FP̃(C) (u) ∨ FP̃( D) (u) }.

Now, consider
  
TP̃(C∩ D) (u) = FP̃ (u, k ) ∨ TC∩ D (k )
k∈ M
  
= FP̃ (u, k ) ∨ ( TC (k ) ∧ TD (k ))
k∈ M
     
= FP̃ (u, k ) ∨ TC (k ) ∧ FP̃ (u, k) ∨ TD (k )
k∈ M k∈ M
= TP̃(C) (u) ∧ TP̃( D) (u),


  
IP̃(C∩ D) (u) = (1 − IP̃ (u, k)) ∧ IC∩ D (k)
k∈ M
  
= (1 − IP̃ (u, k)) ∧ ( IC (k) ∨ ID (k))
k∈ M
     
= (1 − IP̃ (u, k))) ∧ IC (k) ∨ (1 − IP̃ (u, k)) ∨ ID (k)
k∈ M k∈ M
= IP̃(C) (u) ∨ IP̃( D) (u),
  
FP̃(C∩ D) (u) = TP̃ (u, k ) ∧ FC∩ D (k )
k∈ M
  
= TP̃ (u, k ) ∧ ( FC (k ) ∨ FD (k))
k∈ M
     
= TP̃ (u, k ) ∧ FC (k ) ∨ TP̃ (u, k) ∧ FD (k )
k∈ M k∈ M
= FP̃(C) (u) ∨ FP̃( D) (u).
Thus, P̃(C ∩ D ) = P̃(C ) ∩ P̃( D ).

(iii) It can be easily proven by Definition 3.


(iv)
 
P̃(C ∪ D ) = { u, TP̃(C∪ D) (u), IP̃(C∪ D) (u), FP̃(C∪ D) (u) },
 
P̃(C ) ∪ P̃( D ) = { u, TP̃(C) (u) ∨ TP̃( D) (u), IP̃(C) (u) ∧ IP̃( D) (u), FP̃(C) (u) ∧ FP̃( D) (u) },

TP̃(C∪ D) (u) = ( FP̃ (u, k) ∨ TC∪ D (k))
k∈ M
  
= FP̃ (u, k ) ∨ [ TC (k ) ∨ TD (k )]
k∈ M
  
= [ FP̃ (u, k) ∨ TC (k)] ∨ [ FP̃ (u, k) ∨ TD (k)]
k∈ M
     
≥ FP̃ (u, k ) ∨ TC (k) ∨ FP̃ (u, k) ∨ TD (k )
k∈ M k∈ M
= TP̃(C) (u) ∨ TP̃( D) (u),
  
IP̃(C∪ D) (u) = (1 − IP̃ (u, k)) ∧ IC∪ D (k)
k∈ M
  
= (1 − IP̃ (u, k)) ∧ [ IC (k) ∧ ID (k)]
k∈ M
  
= [1 − IP̃ (u, k)) ∧ IC (k)] ∧ [(1 − IP̃ (u, k)) ∧ ID (k)]
k∈ M
     
≤ (1 − IP̃ (u, k)) ∧ IC (k) ∧ (1 − IP̃ (u, k)) ∧ ID (k)
k∈ M k∈ M
= IP̃(C) (u) ∧ IP̃( D) (u),
  
FP̃(C∪ D) (u) = TP̃ (u, k ) ∧ FC∪ D (k)
k∈ M
  
= TP̃ (u, k ) ∧ [ FC (k ) ∧ FD (k )]
k∈ M
  
= [ TP̃ (u, k) ∧ FC (k)] ∧ [ TP̃ (u, k) ∧ FD (k)]
k∈ M
     
≤ TP̃ (u, k ) ∧ FC (k ) ∧ TP̃ (u, k) ∧ FD (k )
k∈ M k∈ M
= FP̃(C) (u) ∧ FP̃( D) (u).


(viii)
 
P̃(C ∩ D ) = { u, TP̃(C∩ D) (u), IP̃(C∩ D) (u), FP̃(C∩ D) (u) },
 
P̃(C ) ∩ P̃( D ) = { u, TP̃(C) (u) ∧ TP̃( D) (u), IP̃(C) (u) ∨ IP̃( D) (u), FP̃(C) (u) ∨ FP̃( D) (u) },

TP̃(C∩ D) (u) = ( TP̃ (u, k) ∧ TC∩ D (k))
k∈ M
  
= TP̃ (u, k ) ∧ [ TC (k ) ∧ TD (k)]
k∈ M
  
= [ TP̃ (u, k) ∧ TC (k)] ∧ [ TP̃ (u, k) ∧ TD (k)]
k∈ M
     
≤ TP̃ (u, k ) ∧ TC (k) ∧ TP̃ (u, k) ∧ TD (k )
k∈ M k∈ M
= TP̃(C) (u) ∧ TP̃( D) (u),
  
IP̃(C∩ D) (u) = IP̃ (u, k) ∨ IC∩ D (k )
k∈ M
  
= IP̃ (u, k) ∨ [ IC (k ) ∨ ID (k )]
k∈ M
  
= [ IP̃ (u, k) ∨ IC (k)] ∨ [ IP̃ (u, k) ∨ ID (k)]
k∈ M
     
≥ ( IP̃ (u, k)) ∨ IC (k) ∨ ( IP̃ (u, k)) ∨ ID (k)
k∈ M k∈ M
= IP̃(C) (u) ∨ IP̃( D) (u),
  
FP̃(C∩ D) (u) = FP̃ (u, k ) ∨ FC∩ D (k )
k∈ M
  
= FP̃ (u, k ) ∨ [ FC (k) ∨ FD (k )]
k∈ M
  
= [ FP̃ (u, k) ∨ FC (k)] ∨ [ FP̃ (u, k) ∨ FD (k)]
k∈ M
     
≥ FP̃ (u, k ) ∨ FC (k) ∨ FP̃ (u, k) ∨ FD (k )
k∈ M k∈ M
= FP̃(C) (u) ∨ FP̃( D) (u).

Thus, P̃(C ∩ D ) ⊆ P̃(C ) ∩ P̃( D ).

The properties (v)–(vii) of the UNSRA operator P̃(C ) can be easily proved similarly.

Theorem 3. Let (Y, M, P̃) be an NSAS. The UNSRA and the LNSRA operators P̃ and P̃ satisfy the following
properties for all C, D ∈ N ( M):

(i) P̃(C − D ) ⊇ P̃(C ) − P̃( D ),


(ii) P̃(C − D ) ⊆ P̃(C ) − P̃( D ).


Proof. (i) By Definition 3 and definition of difference of two NSs, for all u ∈ Y,
  
TP̃(C− D) (u) = FP̃ (u, k) ∨ TC− D (k )
k∈ M
  
= FP̃ (u, k) ∨ ( TC (k ) ∧ FD (k ))
k∈ M
  
= [ FP̃ (u, k) ∨ TC (k)] ∧ [ FP̃ (u, k) ∨ FD (k)]
k∈ M
     
= FP̃ (u, k) ∨ TC (k) ∧ FP̃ (u, k) ∨ FD (k)
k∈ M k∈ M
= TP̃(C) (u) ∧ FP̃( D) (u)
= TP̃(C)− P̃( D) (u),
  
IP̃(C− D) (u) = (1 − IP̃ (u, k)) ∧ IC− D (k)
k∈ M
  
= (1 − IP̃ (u, k)) ∧ ( IC (k) ∧ (1 − ID (k)))
k∈ M
  
= [(1 − IP̃ (u, k)) ∧ IC (k)] ∧ [(1 − IP̃ (u, k)) ∧ (1 − ID (k))]
k∈ M
   
= [(1 − IP̃ (u, k)) ∧ IC (k)] ∧ [1 − IP̃ (u, k) ∨ ID (k) ]
k∈ M
      
≤ (1 − IP̃ (u, k)) ∧ IC (k) ∧ 1 − IP̃ (u, k) ∨ ID (k )
k∈ M k∈ M
      
≤ (1 − IP̃ (u, k)) ∧ IC (k) ∧ 1 − IP̃ (u, k ) ∨ ID (k )
k∈ M k∈ M
= IP̃(C) (u) ∧ (1 − IP̃( D) (u))
= IP̃(C)− P̃( D) (u),
  
FP̃(C− D) (u) = TP̃ (u, k) ∧ FC− D (k )
k∈ M
  
= TP̃ (u, k) ∧ ( FC (k ) ∧ TD (k ))
k∈ M
  
= [ TP̃ (u, k) ∧ FC (k)] ∧ [ TP̃ (u, k) ∧ TD (k)]
k∈ M
     
≤ TP̃ (u, k ) ∧ FC (k) ∧ TP̃ (u, k) ∧ TD (k)
k∈ M k∈ M
= FP̃(C) (u) ∧ TP̃( D) (u)
= FP̃(C)− P̃( D) (u).

Thus, P̃(C − D ) ⊆ P̃(C ) − P̃( D ).

(ii) By Definition 3 and definition of difference of two NSs, for all u ∈ Y,


  
TP̃(C− D) (u) = TP̃ (u, k) ∧ TC− D (k )
k∈ M
  
= TP̃ (u, k) ∧ ( TC (k) ∧ FD (k ))
k∈ M
  
= [ TP̃ (u, k) ∧ TC (k)] ∧ [ TP̃ (u, k) ∧ FD (k)]
k∈ M
     
≤ TP̃ (u, k) ∧ TC (k ) ∧ TP̃ (u, k ) ∧ FD (k)
k∈ M k∈ M
= TP̃(C) (u) ∧ FP̃( D) (u)
= TP̃(C)− P̃( D) (u),
  
IP̃(C− D) (u) = IP̃ (u, k ) ∨ IC− D (k )
k∈ M
  
= IP̃ (u, k ) ∨ ( IC (k ) ∧ (1 − ID (k )))
k∈ M
  
= [ IP̃ (u, k) ∨ IC (k)] ∧ [ IP̃ (u, k) ∨ (1 − ID (k))]
k∈ M
  
= [ IP̃ (u, k) ∨ IC (k)] ∧ [1 − (1 − IP̃ (u, k)) ∨ (1 − ID (k))]
k∈ M
    
= ( IP̃ (u, k) ∨ IC (k)) ∧ 1 − (1 − IP̃ (u, k)) ∧ ID (k)
k∈ M k∈ M
= IP̃(C) (u) ∧ (1 − IP̃( D) (u))
= IP̃(C)− P̃( D) (u),
  
FP̃(C− D) (u) = FP̃ (u, k ) ∨ FC− D (k )
k∈ M
  
= FP̃ (u, k ) ∨ ( FC (k ) ∧ TD (k ))
k∈ M
  
= [ FP̃ (u, k) ∨ FC (k)] ∧ [ FP̃ (u, k) ∨ TD (k)]
k∈ M
     
= FP̃ (u, k ) ∨ FC (k ) ∧ FP̃ (u, k) ∨ TD (k )
k∈ M k∈ M
= FP̃(C) (u) ∧ TP̃( D) (u)
= FP̃(C)− P̃( D) (u).

Thus, P̃(C − D ) ⊆ P̃(C ) − P̃( D ).

Theorem 4. Let (Y, M, P̃) be an NSAS. If P̃ is serial, then the UNSA and the LNSA operators P̃ and P̃ satisfy
the following properties for all ∅, M, C ∈ N ( M):

(i) P̃(∅) = ∅, P̃(M) = Y,


(ii) P̃(C ) ⊆ P̃(C ).


Proof. (i)

P̃(∅)= {(u, TP̃(∅) (u), IP̃(∅) (u), FP̃(∅) (u)) | u ∈ Y },


  
T_P̃(∅)(u) = ⋁_{k ∈ M} (T_P̃(u, k) ∧ T_∅(k)), I_P̃(∅)(u) = ⋀_{k ∈ M} (I_P̃(u, k) ∨ I_∅(k)), F_P̃(∅)(u) = ⋀_{k ∈ M} (F_P̃(u, k) ∨ F_∅(k)).

Since ∅ is a null NS on M, T∅ (k) = 0, I∅ (k ) = 1, F∅ (k ) = 1, and this implies


T_P̃(∅)(u) = 0, I_P̃(∅)(u) = 1, F_P̃(∅)(u) = 1. Thus, P̃(∅) = ∅.

Now,

P̃(M) = {(u, TP̃(M) (u), IP̃(M) (u), FP̃(M) (u)) | u ∈ Y },


     
T_P̃(M)(u) = ⋀_{k ∈ M} (F_P̃(u, k) ∨ T_M(k)), I_P̃(M)(u) = ⋁_{k ∈ M} ((1 − I_P̃(u, k)) ∧ I_M(k)), F_P̃(M)(u) = ⋁_{k ∈ M} (T_P̃(u, k) ∧ F_M(k)).

Since M is full NS on M, TM (k) = 1, IM (k ) = 0, FM (k) = 0, for all k ∈ M, and this implies


TP̃(M) (u) = 1, IP̃(M) (u) = 0, FP̃(M) (u) = 0. Thus, P̃(M) = Y.
(ii) Since (Y, M, P̃) is an NSAS and P̃ is a serial neutrosophic soft relation, then, for each u ∈ Y, there
exists k ∈ M, such that TP̃ (u, k ) = 1, IP̃ (u, k) = 0, and FP̃ (u, k) = 0. The UNSRA and LNSRA
operators P̃(C ), and P̃(C ) of an NS C can be defined as:
 
T_P̃(C)(u) = ⋁_{k ∈ M} T_C(k), I_P̃(C)(u) = ⋀_{k ∈ M} I_C(k), F_P̃(C)(u) = ⋀_{k ∈ M} F_C(k)  (for the UNSRA operator),

T_P̃(C)(u) = ⋀_{k ∈ M} T_C(k), I_P̃(C)(u) = ⋁_{k ∈ M} I_C(k), F_P̃(C)(u) = ⋁_{k ∈ M} F_C(k)  (for the LNSRA operator).

Clearly, T_P̃(C)(u) ≤ T_P̃(C)(u), I_P̃(C)(u) ≥ I_P̃(C)(u), F_P̃(C)(u) ≥ F_P̃(C)(u) for all u ∈ Y.
Thus, P̃(C ) ⊆ P̃(C ).

The conventional NSS is a mapping from a parameter to the neutrosophic subset of universe and
let ( P̃, M ) be NSS. Now, we present the constructive definition of neutrosophic soft rough relation by
using a neutrosophic soft relation R̃ from M × M = Ḿ to N (Y × Y = Ý ), where Y is a universal set
and M is a set of parameters.

Definition 4. A neutrosophic soft rough relation ( R̃( D ), R̃( D )) on Y is an NSRS, R̃ : Ḿ → N (Ý ) is


a neutrosophic soft relation on Y defined by

R̃(k i k j ) = {ui u j | ∃ui ∈ P̃(k i ), u j ∈ P̃(k j )}, ui u j ∈ Ý,


such that

TR̃ (ui u j , k i k j ) ≤ min{ TP̃ (ui , k i ), TP̃ (u j , k j )},


IR̃ (ui u j , k i k j ) ≤ max{ IP̃ (ui , k i ), IP̃ (u j , k j )},
FR̃ (ui u j , k i k j ) ≤ max{ FP̃ (ui , k i ), FP̃ (u j , k j )}.

For any D ∈ N(Ḿ), the UNSA and the LNSA of D w.r.t. (Ý, Ḿ, R̃) are defined as follows:

R̃( D ) = {(ui u j , TR̃( D) (ui u j ), IR̃( D) (ui u j ), FR̃( D) (ui u j )) | ui u j ∈ Ý },

R̃( D ) = {(ui u j , TR̃( D) (ui u j ), IR̃( D) (ui u j ), FR̃( D) (ui u j )) | ui u j ∈ Ý },

where

T_R̃(D)(u_i u_j) = ⋁_{k_i k_j ∈ Ḿ} (T_R̃(u_i u_j, k_i k_j) ∧ T_D(k_i k_j)), I_R̃(D)(u_i u_j) = ⋀_{k_i k_j ∈ Ḿ} (I_R̃(u_i u_j, k_i k_j) ∨ I_D(k_i k_j)), F_R̃(D)(u_i u_j) = ⋀_{k_i k_j ∈ Ḿ} (F_R̃(u_i u_j, k_i k_j) ∨ F_D(k_i k_j))  (for the UNSA operator),

T_R̃(D)(u_i u_j) = ⋀_{k_i k_j ∈ Ḿ} (F_R̃(u_i u_j, k_i k_j) ∨ T_D(k_i k_j)), I_R̃(D)(u_i u_j) = ⋁_{k_i k_j ∈ Ḿ} ((1 − I_R̃(u_i u_j, k_i k_j)) ∧ I_D(k_i k_j)), F_R̃(D)(u_i u_j) = ⋁_{k_i k_j ∈ Ḿ} (T_R̃(u_i u_j, k_i k_j) ∧ F_D(k_i k_j))  (for the LNSA operator).

The pair ( R̃( D ), R̃( D )) is called NSRR and R̃, R̃ : N ( Ḿ ) → N (Ý ) are called the LNSRA and the
UNSRA operators, respectively.

Remark 4. Consider an NS D on Ḿ and an NS C on M,

TD (k i k j ) ≤ min{ TC (k i ), TC (k j )},
ID ( k i k j ) ≤ max{ IC (k i ), IC (k j )},
FD (k i k j ) ≤ max{ FC (k i ), FC (k j )}.

According to the definition of NSRR, we get

TR̃( D) (ui u j ) ≤ min{ TR̃(C) (ui ), TR̃(C) (u j )},


IR̃( D) (ui u j ) ≤ max{ IR̃(C) (ui ), IR̃(C) (u j )},
F_R̃(D)(u_i u_j) ≤ max{F_R̃(C)(u_i), F_R̃(C)(u_j)}.


Similarly, for LNSRA operator R̃( D ),

TR̃( D) (ui u j ) ≤ min{ TR̃(C) (ui ), TR̃(C) (u j )},


IR̃( D) (ui u j ) ≤ max{ IR̃(C) (ui ), IR̃(C) (u j )},
F_R̃(D)(u_i u_j) ≤ max{F_R̃(C)(u_i), F_R̃(C)(u_j)}.

Example 5. Let Y = {u1 , u2 , u3 } be a universal set and M = {k1 , k2 , k3 } a set of parameters. A neutrosophic
soft set ( P̃, M ) on Y can be defined in tabular form (see Table 5) as follows:

Table 5. Neutrosophic soft set ( P̃, M).

P̃ u1 u2 u3
k1 (0.4, 0.5, 0.6) (0.7, 0.3, 0.2) (0.6, 0.3, 0.4)
k2 (0.5, 0.3, 0.6) (0.3, 0.4, 0.3) (0.7, 0.2, 0.3)
k3 (0.7, 0.2, 0.3) (0.6, 0.5, 0.4) (0.7, 0.2, 0.4)

Let E = {u1 u2 , u2 u3 , u2 u2 , u3 u2 } ⊆ Ý and L = {k1 k3 , k2 k1 , k3 k2 } ⊆ Ḿ.

Then, a soft relation R̃ on E (from L to E) can be defined in tabular form (see Table 6) as follows:

Table 6. Neutrosophic soft relation R̃.

R̃ u1 u2 u2 u3 u2 u2 u3 u2
k1 k3 (0.4, 0.4, 0.5) (0.6, 0.3, 0.4) (0.5, 0.4, 0.2) (0.5, 0.4, 0.3)
k2 k1 (0.3, 0.3, 0.4) (0.3, 0.2, 0.3) (0.2, 0.3, 0.3) (0.7, 0.2, 0.2)
k3 k2 (0.3, 0.3, 0.2) (0.5, 0.3, 0.2) (0.2, 0.4, 0.4) (0.3, 0.4, 0.4)

Let C = {(k1 , 0.2, 0.4, 0.6), (k2 , 0.4, 0.5, 0.2), (k3 , 0.1, 0.2, 0.4)} be an NS on M, then
R̃(C ) = {(u1 , 0.4, 0.2, 0.4), (u2 , 0.3, 0.4, 0.3), (u3 , 0.4, 0.2, 0.3)},
R̃(C ) = {(u1 , 0.3, 0.5, 0.4), (u2 , 0.2, 0.5, 0.6), (u3 , 0.4, 0.5, 0.6)},
Let D = {(k1 k3, 0.1, 0.3, 0.5), (k2 k1, 0.2, 0.4, 0.3), (k3 k2, 0.1, 0.2, 0.3)} be an NS on L, then
R̃( D ) = {(u1 u2 , 0.2, 0.3, 0.3), (u2 u3 , 0.2, 0.3, 0.3), (u2 u2 , 0.2, 0.4, 0.3), (u3 u2 , 0.2, 0.4, 0.3)},
R̃( D ) = {(u1 u2 , 0.2, 0.4, 0.4), (u2 u3 , 0.2, 0.4, 0.5), (u2 u2 , 0.3, 0.4, 0.5), (u3 u2 , 0.2, 0.4, 0.5)}.
Hence, R̃( D ) = ( R̃( D ), R̃( D )) is NSRR.

Theorem 5. Let P̃1 , P̃2 be two NSRRs from universal Y to a parameter set M; for all C ∈ N ( M ), we have

(i) P̃1 ∪ P̃2 (C ) = P̃1 (C ) ∩ P̃2 (C ),


(ii) P̃1 ∪ P̃2 (C ) = P̃1 (C ) ∪ P̃2 (C ).

Theorem 6. Let P̃1 , P̃2 be two neutrosophic soft relations from universal Y to a parameter set M; for all
C ∈ N ( M ), we have

(i) P̃1 ∩ P̃2 (C ) ⊇ P̃1 (C ) ∪ P̃2 (C ) ⊇ P̃1 (C ) ∩ P̃2 (C ),


(ii) P̃1 ∩ P̃2 (C ) ⊆ P̃1 (C ) ∩ P̃2 (C ).


4. Application
In this section, we apply the concept of NSRSs to a decision-making problem. In recent times,
the object recognition problem has gained considerable importance. The object recognition problem
can be considered as a decision-making problem, in which the final identification of an object is based on a given amount of information. A detailed description of the algorithm for the selection of the most suitable object from an available set of alternatives is given; the proposed decision-making method uses the lower and upper approximation operators to capture the relevant information of the problem. The presented algorithm can be applied to avoid lengthy calculations when dealing with a large number of objects, and the method can be used in various domains for the multi-criteria selection of objects. A multi-criteria decision-making (MCDM) problem can be modeled using neutrosophic soft rough sets, which are well suited for solving such problems.
In the pharmaceutical industry, different pharmaceutical companies develop, produce and
discover pharmaceutical medicines (drugs) for use as medication. These pharmaceutical companies
deal with “brand name medicine” and “generic medicine”. Brand name medicine and generic medicine are bioequivalent, with the same rate and extent of absorption. They have the same active ingredients, but the inactive ingredients may differ, and the product may be slightly dissimilar in color, shape, or markings. The most important difference is cost: generic medicine is less expensive than brand name medicine, and generic drug manufacturers usually compete to produce products that cost less. We consider a brand name drug “u = Claritin (loratadine)”, used as a seasonal allergy medication, with ideal neutrosophic value number nu = (1, 0, 0). Consider

Y = {u1 = Nasacort Aq (Triamcinolone), u2 = Zyrtec D (Cetirizine/Pseudoephedrine),


u3 = Sudafed (Pseudoephedrine), u4 = Claritin-D (loratadine/pseudoephedrine),
u5 = Flonase (Fluticasone)}

is a set of generic versions of “Clarition”. We want to select the most suitable generic version of Claritin
on the basis of parameters e1 = Highly soluble, e2 = Highly permeable, e3 = Rapidly dissolving.
Let M = {e1, e2, e3} be a set of parameters, and let P̃ be a neutrosophic soft relation from Y to the parameter set
M as shown in Table 7.

Table 7. Neutrosophic soft set ( P̃, M).

P̃ e1 e2 e3
u1 (0.4, 0.5, 0.6) (0.7, 0.3, 0.2) (0.6, 0.3, 0.4)
u2 (0.5, 0.3, 0.6) (0.3, 0.4, 0.3) (0.7, 0.2, 0.3)
u3 (0.7, 0.2, 0.3) (0.6, 0.5, 0.4) (0.7, 0.2, 0.4)
u4 (0.5, 0.7, 0.5) (0.8, 0.4, 0.6) (0.8, 0.7, 0.6)
u5 (0.6, 0.5, 0.4) (0.7, 0.8, 0.5) (0.7, 0.3, 0.5)

Suppose C = {(e1 , 0.2, 0.4, 0.5), (e2 , 0.5, 0.6, 0.4), (e3 , 0.7, 0.5, 0.4)} is the most favorable object
that is an NS on the parameter set M under consideration. Then, ( P̃(C ), P̃(C )) is an NSRS in
NSAS (Y, M, P̃), where

P̃(C ) = {(u1 , 0.6, 0.5, 0.4), (u2 , 0.7, 0.4, 0.4), (u3 , 0.7, 0.4, 0.4), (u4 , 0.7, 0.6, 0.5), (u5 , 0.7, 0.5, 0.5)},
P̃(C ) = {(u1 , 0.5, 0.6, 0.4), (u2 , 0.5, 0.6, 0.5), (u3 , 0.3, 0.3, 0.5), (u4 , 0.5, 0.6, 0.5), (u5 , 0.4, 0.5, 0.5)}.


In [6], the sum of two neutrosophic numbers is defined. The sum of LNSRA and the UNSRA
operators P̃(C ) and P̃(C ) is an NS P̃(C ) ⊕ P̃(C ) defined by

P̃(C ) ⊕ P̃(C ) = {(u1 , 0.8, 0.3, 0.16), (u2 , 0.85, 0.24, 0.2), (u3 , 0.79, 0.2, 0.2), (u4 , 0.85, 0.36, 0.25),
(u5 , 0.82, 0.25, 0.25)}.
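The combination rule consistent with these numbers is the addition of simplified neutrosophic numbers from [6]: (T1, I1, F1) ⊕ (T2, I2, F2) = (T1 + T2 − T1·T2, I1·I2, F1·F2). A one-line MATLAB check for u1, with the two approximations written as row vectors a and b (our names), is:

a = [0.6 0.5 0.4];                                 % first approximation listed for u1
b = [0.5 0.6 0.4];                                 % second approximation listed for u1
s = [a(1)+b(1)-a(1)*b(1), a(2)*b(2), a(3)*b(3)]    % gives (0.8, 0.3, 0.16), as listed above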

Let nui = ( Tnui , Inui , Fnui ) be a neutrosophic value number of generic versions medicine ui . We can
calculate the cosine similarity measure S(nui , nu ) between each neutrosophic value number nui of
generic version ui and ideal value number nu of brand name drug u, and the grading of all generic
version medicines of Y can be determined. The cosine similarity measure is calculated as the inner
product of two vectors divided by the product of their lengths. It is the cosine of the angle between
the vector representations of two neutrosophic soft rough sets. The cosine similarity measure is
a fundamental measure used in information technology. In [3], the cosine similarity is measured between neutrosophic numbers, and it is demonstrated that the cosine similarity measure is a special case of the correlation coefficient in SVNSs. A decision-making method is then proposed by the use of the cosine similarity measure of SVNSs, in which the evaluation information for alternatives with respect to criteria is carried out by a truth-membership degree, an indeterminacy-membership degree, and a falsity-membership degree under a single-valued neutrosophic environment. The measure is defined as follows:

S(n_u, n_{u_i}) = (T_{n_u}·T_{n_{u_i}} + I_{n_u}·I_{n_{u_i}} + F_{n_u}·F_{n_{u_i}}) / (√(T_{n_u}² + I_{n_u}² + F_{n_u}²) · √(T_{n_{u_i}}² + I_{n_{u_i}}² + F_{n_{u_i}}²)).    (1)

Through the cosine similarity measure between each object and the ideal object, the ranking order
of all objects can be determined and the best object can be easily identified as well. The advantage is
that, among the existing approaches, the proposed MCDM approach relies on simple tools and concepts from the neutrosophic similarity measure approach. An illustrative application shows that the proposed method is simple and effective.
The generic version medicine ui with the larger similarity measure S(nui , nu ) is the most suitable
version ui because it is close to the brand name drug u. By comparing the cosine similarity measure
values, the grading of all generic medicines can be determined, and we can find the most suitable
generic medicine after selection of suitable NS of parameters. By Equation (1), we can calculate the
cosine similarity measure between neutrosophic value numbers nu of u and nui of ui as follows:

S(n_u, n_{u_1}) = 0.9203, S(n_u, n_{u_2}) = 0.9386, S(n_u, n_{u_3}) = 0.9415,
S(n_u, n_{u_4}) = 0.8888, S(n_u, n_{u_5}) = 0.9183.

We get S(nu , nu3 ) > S(nu , nu2 ) > S(nu , nu1 ) > S(nu , nu5 ) > S(nu , nu4 ). Thus, the optimal decision
is u3 , and the most suitable generic version of Claritin is Sudafed (Pseudoephedrine). We have used
software MATLAB (version 7, MathWorks, Natick, MA, USA) for calculations in the application. The
flow chart of the algorithm is general for any number of objects with respect to certain parameters.
The flow chart of our proposed method is given in Figure 1. The method is presented as an algorithm
in Algorithm 1.
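The essential computation of Algorithm 1 can be condensed into a few MATLAB statements. In the sketch below, N holds the combined values P̃(C) ⊕ P̃(C) listed above and nu is the ideal value number of the brand name drug; the variable names N, nu, S and order are ours.

% Cosine similarity of Equation (1) against the ideal number, and the resulting ranking.
N  = [0.80 0.30 0.16; 0.85 0.24 0.20; 0.79 0.20 0.20; 0.85 0.36 0.25; 0.82 0.25 0.25];
nu = [1 0 0];
S  = (N * nu') ./ (sqrt(sum(N.^2, 2)) * norm(nu));   % S = [0.9203; 0.9386; 0.9415; 0.8888; 0.9183]
[~, order] = sort(S, 'descend');                     % order(1) = 3, so u3 (Sudafed) is selected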


[Flow chart: read the universal set Y, the parameter set M, the neutrosophic soft relation P̃ and the NS C on M; compute the LNSRA and UNSRA operators P̃(C) and P̃(C); if they coincide, C induces a single neutrosophic set on Y, otherwise form P̃(C) ⊕ P̃(C), compute the cosine similarities S(n_u, n_{u_i}) with respect to n_u = (1, 0, 0), take D = max(S), and output the element(s) u_j attaining it.]

Figure 1. Flow chart for selection of most suitable objects.


Algorithm 1: Algorithm for selection of the most suitable objects

1. Begin
2. Input the number of elements in universal set Y = {u1 , u2 , . . . , un }.
3. Input the number of elements in parameter set M = {e1 , e2 , . . . , em }.
4. Input a neutrosophic soft relation P̃ from Y to M.
5. Input an NS C on M.
6. if size( P̃) = [n, 3 ∗ m]
7. fprintf( size of neutrosophic soft relation from universal set to parameter
set is not correct, it should be of order %dx%d;  , n, 3 ∗ m)
8. error( Dimemsion of neutrosophic soft relation on vertex set is not correct. ’)
9. end
10. if size(C ) = [m, 3]
11. fprintf( size of NS on parameter set is not correct,
it should be of order %dx3; ’,m)
12. error(’Dimemsion of NS on parameter set is not correct.’)
13. end
14. TP̃(C) = zeros(n, 1);
15. IP̃(C) = ones(n, 1);
16. FP̃(C) = ones(n, 1);
17. TP̃(C) = ones(n, 1);
18. IP̃(C) = zeros(n, 1);
19. FP̃(C) = zeros(n, 1);
20. if size( P̃) == [n, 3 ∗ m]
21. if size(C ) == [m, 3]
22. if P̃ >= 0 && P̃ <= 1
23. if C >= 0 && C <= 1
24. for i = 1 : n
25. for k = 1 : m
26. j=3*k-2;
27. TP̃(C) (i, 1) = max( TP̃(C) (i, 1), min( P̃(i, j), C (k, 1)));
28. IP̃(C) (i, 1) = min( IP̃(C) (i, 1), max( P̃(i, j + 1), C (k, 2)));
29. FP̃(C) (i, 1) = min( FP̃(C) (i, 1), max( P̃(i, j + 2), C (k, 3)));
30. TP̃(C) (i, 1) = min( TP̃(C) (i, 1), max( P̃(i, j + 2), C (k, 1)));
31. IP̃(C) (i, 1) = max( IP̃(C) (i, 1), min((1 − P̃(i, j + 1)), C (k, 2)));
32. FP̃(C) (i, 1) = max( FP̃(C) (i, 1), min( P̃(i, j), C (k, 3)));
33. end
34. end
35. P̃(C ) = ( TP̃(C) , IP̃(C) , FP̃(C) )
36. P̃(C ) = ( TP̃(C) , IP̃(C) , FP̃(C) )
37. if P̃(C ) == P̃(C )
38. fprintf( it is a neutrosophic set on universal set.  )
39. else
40. fprintf(it is an NSRS on universal set.  )
41. P̃(C ) ⊕ P̃(C ) = zeros(n, 3);


42. for i=1:n


43. TP̃(C) (i ) ⊕ TP̃(C) (i ) = TP̃(C) (i ) + TP̃(C) (i )
− TP̃(C) (i ). ∗ TP̃(C) (i );
44. IP̃(C) (i ) ⊕ IP̃(C) (i ) = IP̃(C) (i ). ∗ IP̃(C) (i );
45. FP̃ (C )(i ) ⊕ FP̃(C) (i ) = FP̃(C) (i ). ∗ FP̃(C) (i );
46. end
47. nu = (1, 0, 0);
48. S(nu , nui ) = zeros(n, 1);
49. for i=1:n
50. S(n_u, n_{u_i}) = (T_{n_u}·T_{n_{u_i}} + I_{n_u}·I_{n_{u_i}} + F_{n_u}·F_{n_{u_i}}) / (sqrt(T_{n_u}^2 + I_{n_u}^2 + F_{n_u}^2) * sqrt(T_{n_{u_i}}^2 + I_{n_{u_i}}^2 + F_{n_{u_i}}^2));
51. end
52. S ( n u , n ui )
53. D=max(S);
54. l=0;
55. m=zeros(n,1);
56. D2=zeros(n,1);
57. for j=1:n
58. if S(j,1)==D
59. l=l+1;
60. D2(j,1)=S(j,1);
61. m(j)=j;
62. end
63. end
64. for j = 1 : n
65. if m( j) = 0
66. fprintf( you can choice the element u%d  ,j)
67. end
68. end
69. end
70. end
71. end
72. end
73. end
74. End

5. Conclusions and Future Directions


Rough set theory can be considered as an extension of classical set theory. Rough set theory
is a very useful mathematical model to handle vagueness. NS theory, RS theory and SS theory are
three useful distinguished approaches to deal with vagueness. NS and RS models are used to handle
uncertainty, and combining these two models with another remarkable model of SSs gives more
precise results for decision-making problems. In this paper, we have first presented the notion of
SRNSs. Furthermore, we have introduced NSRSs and investigated some properties of NSRSs in detail.
The notion of NSRS can be utilized as a mathematical tool to deal with imprecise and unspecified
information. In addition, a decision-making method based on NSRSs has been proposed. This research
work can be extended to (1) rough bipolar neutrosophic soft sets; (2) bipolar neutrosophic soft rough
sets; (3) interval-valued bipolar neutrosophic rough sets; and (4) neutrosophic soft rough graphs.


Author Contributions: Muhammad Akram and Sundas Shahzadi conceived and designed the experiments;
Florentin Smarandache analyzed the data; Sundas Shahzadi wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; American Research Press:
Rehoboth, DE, USA, 1998; 105p.
2. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single-valued neutrosophic sets. Multispace Multistruct.
2010, 4, 410–413.
3. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued neutrosophic
environment. Int. J. Gen. Syst. 2013, 42, 386–394.
4. Ye, J. Improved correlation coefficients of single valued neutrosophic sets and interval neutrosophic sets for
multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 27, 2453–2462.
5. Ye, J.; Fu, J. Multi-period medical diagnosis method using a single valued neutrosophic similarity measure
based on tangent function. Comput. Methods Prog. Biomed. 2016, 123, 142–149.
6. Peng, J.J.; Wang, J.Q.; Zhang, H.Y.; Chen, X.H. An outranking approach for multi-criteria decision-making
problems with simplified neutrosophic sets. Appl. Soft Comput. 2014, 25, 336–346.
7. Molodtsov, D.A. Soft set theory-first results. Comput. Math. Appl. 1999, 37, 19–31.
8. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602.
9. Maji, P.K.; Biswas, R.; Roy, A.R. Intuitionistic fuzzy soft sets. J. Fuzzy Math. 2001, 9, 677–692.
10. Maji, P.K. Neutrosophic soft set. Ann. Fuzzy Math. Inform. 2013, 5, 157–168.
11. Babitha, K.V.; Sunil, J.J. Soft set relations and functions. Comput. Math. Appl. 2010, 60, 1840–1849.
12. Sahin, R.; Kucuk, A. On similarity and entropy of neutrosophic soft sets. J. Intell. Fuzzy Syst. Appl. Eng. Technol.
2014, 27, 2417–2430.
13. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
14. Ali, M. A note on soft sets, rough sets and fuzzy soft sets. Appl. Soft Comput. 2011, 11, 3329–3332.
15. Feng, F.; Liu, X.; Leoreanu-Fotea, B.; Jun, Y.B. Soft sets and soft rough sets. Inf. Sci. 2011, 181, 1125–1137.
16. Shabir, M.; Ali, M.I.; Shaheen, T. Another approach to soft rough sets. Knowl.-Based Syst. 2013, 40, 72–80.
17. Feng, F.; Li, C.; Davvaz, B.; Ali, M.I. Soft sets combined with fuzzy sets and rough sets: A tentative approach.
Soft Comput. 2010, 14, 899–911.
18. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209.
19. Meng, D.; Zhang, X.; Qin, K. Soft rough fuzzy sets and soft fuzzy rough sets. Comput. Math. Appl. 2011, 62,
4635–4645.
20. Sun, B.Z.; Ma, W.; Liu, Q. An approach to decision making based on intuitionistic fuzzy rough sets over
two universes. J. Oper. Res. Soc. 2013, 64, 1079–1089.
21. Sun, B.Z.; Ma, W. Soft fuzzy rough sets and its application in decision making. Artif. Intell. Rev. 2014, 41,
67–80.
22. Zhang, X.; Dai, J.; Yu, Y. On the union and intersection operations of rough sets based on various
approximation spaces. Inf. Sci. 2015, 292, 214–229.
23. Zhang, H.; Shu, L. Generalized intuitionistic fuzzy rough set based on intuitionistic fuzzy covering. Inf. Sci.
2012, 198, 186–206.
24. Zhang, X.; Zhou, B.; Li, P. A general frame for intuitionistic fuzzy rough sets. Inf. Sci. 2012, 216, 34–49.
25. Zhang, H.; Shu, L.; Liao, S. Intuitionistic fuzzy soft rough set and its application in decision making.
Abstr. Appl. Anal. 2014, 2014, 13.
26. Zhang, H.; Xiong, L.; Ma, W. Generalized intuitionistic fuzzy soft rough set and its application in decision
making. J. Comput. Anal. Appl. 2016, 20, 750–766.
27. Broumi, S.; Smarandache, F. Interval-valued neutrosophic soft rough sets. Int. J. Comput. Math. 2015,
2015, 232919.
28. Broumi, S.; Smarandache, F.; Dhar, M. Rough Neutrosophic sets. Neutrosophic Sets Syst. 2014, 3, 62–67.
29. Yang, H.L.; Zhang, C.L.; Guo, Z.L.; Liu, Y.L.; Liao, X. A hybrid model of single valued neutrosophic sets and
rough sets: Single valued neutrosophic rough set model. Soft Comput. 2016, 21, 6253–6267.


30. Faizi, S.; Salabun, W.; Rashid, T.; Watrbski, J.; Zafar, S. Group decision-making for hesitant fuzzy sets based
on characteristic objects method. Symmetry 2017, 9, 136.
31. Faizi, S.; Rashid, T.; Salabun, W.; Zafar, S.; Watrbski, J. Decision making with uncertainty using hesitant
fuzzy sets. Int. J. Fuzzy Syst. 2018, 20, 93–103.
32. Mardani, A.; Nilashi, M.; Antucheviciene, J.; Tavana, M.; Bausys, R.; Ibrahim, O. Recent Fuzzy Generalisations
of Rough Sets Theory: A Systematic Review and Methodological Critique of the Literature. Complexity 2017,
2017, 33.
33. Liang, R.X.; Wang, J.Q.; Zhang, H.Y. A multi-criteria decision-making method based on single-valued
trapezoidal neutrosophic preference relations with complete weight information. Neural Comput. Appl. 2017,
1–16, doi:10.1007/s00521-017-2925-8.
34. Liang, R.; Wang, J.; Zhang, H. Evaluation of e-commerce websites: An integrated approach under
a single-valued trapezoidal neutrosophic environment. Knowl.-Based Syst. 2017, 135, 44–59.
35. Peng, H.G.; Zhang, H.Y.; Wang, J.Q. Probability multi-valued neutrosophic sets and its application in multi-
criteria group decision-making problems. Neural Comput. Appl. 2016, 1–21, doi:10.1007/s00521-016-2702-0.
36. Wang, L.; Zhang, H.Y.; Wang, J.Q. Frank Choquet Bonferroni mean operators of bipolar neutrosophic sets and
their application to multi-criteria decision-making problems. Int. J. Fuzzy Syst. 2018, 20, 13–28.
37. Zavadskas, E.K.; Bausys, R.; Kaklauskas, A.; Ubarte, I.; Kuzminske, A.; Gudiene, N. Sustainable market
valuation of buildings by the single-valued neutrosophic MAMVA method. Appl. Soft Comput. 2017, 57,
74–87.
38. Li, Y.; Liu, P.; Chen, Y. Some single valued neutrosophic number heronian mean operators and their
application in multiple attribute group decision making. Informatica 2016, 27, 85–110.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Neutrosophic Soft Rough Graphs with Application
Muhammad Akram 1, *, Hafsa M. Malik 1 , Sundas Shahzadi 1 and Florentin Smarandache 2
1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected] (H.M.M.); [email protected] (S.S.)
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]; Tel.: +92-42-99231241

Received: 27 January 2018; Accepted: 23 February 2018; Published: 26 February 2018

Abstract: Neutrosophic sets (NSs) handle uncertain information while fuzzy sets (FSs) and
intuitionistic fuzzy sets (IFs) fail to handle indeterminate information. Soft set theory, neutrosophic
set theory, and rough set theory are different mathematical models for handling uncertainties and
they are mutually related. The neutrosophic soft rough set (NSRS) model is a hybrid model by
combining neutrosophic soft sets with rough sets. We apply neutrosophic soft rough sets to graphs.
In this research paper, we introduce the idea of neutrosophic soft rough graphs (NSRGs) and describe
different methods of their construction. We consider the application of NSRG in decision-making
problems. In particular, we develop efficient algorithms to solve decision-making problems.

Keywords: neutrosophic soft rough sets; neutrosophic soft rough graphs; decision-making; algorithm

1. Introduction
Smarandache [1] initiated the concept of neutrosophic set (NS). Smarandache’s NS is characterized
by three parts: truth, indeterminacy, and falsity. Truth, indeterminacy and falsity membership
values behave independently and deal with problems having uncertain, indeterminate and imprecise
data. Wang et al. [2] gave a new concept of single valued neutrosophic sets (SVNSs) and defined
the set theoretic operators on an instance of NS called SVNS. Peng et al. [3] discussed the
operations of simplified neutrosophic numbers and introduced an outranking idea of simplified
neutrosophic numbers.
Molodtsov [4] introduced the notion of soft set (SS) as a novel mathematical approach for handling
uncertainties. Molodtsov’s SSs gave us a new technique for dealing with uncertainty from the
viewpoint of parameters. Maji et al. [5–7] introduced neutrosophic soft sets (NSSs), intuitionistic
fuzzy soft sets and fuzzy soft sets (FSSs). In [8], Sahin and Kucuk presented NSS in the form of
neutrosophic relations.
Theory of rough set (RS) was proposed by Pawlak [9] in 1982. Rough set theory is used to study
the intelligence systems containing incomplete, uncertain or inexact information. The lower and upper
approximation operators of RSs are used for managing hidden information in a system. Feng et al. [10]
took a significant step to introduce parametrization tools in RSs. Meng et al. [11] provide further
discussion of the combination of SSs, RSs and FSs. The existing results of RSs and other extended
RSs such as rough fuzzy sets, generalized rough fuzzy sets, soft fuzzy rough sets and intuitionistic
fuzzy rough sets based decision-making models have their advantages and limitations [12,13].
In a different way, rough set approximations have been constructed into the intuitionistic fuzzy
environment and are known as intuitionistic fuzzy rough sets and rough intuitionistic fuzzy sets [14,15].
Zhang et al. [16,17] presented the notions of soft rough sets, soft rough intuitionistic fuzzy sets,
intuitionistic fuzzy soft rough sets, its application in decision-making, and also introduced generalized
intuitionistic fuzzy soft rough sets. Broumi et al. [18,19] developed a hybrid structure by combining


RSs and NSs, called RNSs, they also presented interval valued neutrosophic soft rough sets by
combining interval valued neutrosophic soft sets and RSs. Yang et al. [20] proposed single valued
neutrosophic rough sets (SVNRSs) by combining SVNSs and RSs and defined SVNRSs on two universes
and established an algorithm for a decision-making problem based on SVNRSs on two universes.
Akram and Nawaz [21] have introduced the concept of soft graphs and some operation on soft
graphs. Certain concepts of fuzzy soft graphs and intuitionistic fuzzy soft graphs are discussed
in [22–24]. Akram and Shahzadi [25] have introduced neutrosophic soft graphs. Zafar and Akram [26]
introduced a rough fuzzy digraph and several basic notions concerning rough fuzzy digraphs. In this
research paper, a neutrosophic soft rough set is treated as a generalization of a neutrosophic rough set. We introduce the idea of neutrosophic soft rough graphs (NSRGs), obtained by combining NSRSs with graphs, and describe different methods of their construction. We consider the application of NSRGs to decision-making problems and, in particular, develop efficient algorithms to solve such problems.
For other notations, terminologies and applications not mentioned in the paper, the readers are
referred to [27–35].

2. Neutrosophic Soft Rough Information


In this section, we will introduce the notions of neutrosophic soft rough relation (NSRR),
and NSRGs.

Definition 1. Let Y be an initial universal set, P a universal set of parameters and M ⊆ P. For an arbitrary neutrosophic soft relation Q over Y × M, (Y, M, Q) is called a neutrosophic soft approximation space (NSAS). For any NS A ∈ N(M), we define the upper neutrosophic soft rough approximation (UNSRA) and the lower neutrosophic soft rough approximation (LNSRA) operators of A with respect to (Y, M, Q), denoted by Q(A) and Q(A), respectively, as follows:

Q(A) = {(u, T_Q(A)(u), I_Q(A)(u), F_Q(A)(u)) | u ∈ Y},

Q(A) = {(u, T_Q(A)(u), I_Q(A)(u), F_Q(A)(u)) | u ∈ Y},

where

T_Q(A)(u) = ⋁_{e ∈ M} (T_Q(A)(u, e) ∧ T_A(e)), I_Q(A)(u) = ⋀_{e ∈ M} (I_Q(A)(u, e) ∨ I_A(e)), F_Q(A)(u) = ⋀_{e ∈ M} (F_Q(A)(u, e) ∨ F_A(e))  (for the UNSRA operator),

T_Q(A)(u) = ⋀_{e ∈ M} (F_Q(A)(u, e) ∨ T_A(e)), I_Q(A)(u) = ⋁_{e ∈ M} ((1 − I_Q(A)(u, e)) ∧ I_A(e)), F_Q(A)(u) = ⋁_{e ∈ M} (T_Q(A)(u, e) ∧ F_A(e))  (for the LNSRA operator).

The pair (Q(A), Q(A)) is called the NSRS of A w.r.t. (Y, M, Q); Q and Q are referred to as the LNSRA and the UNSRA operators, respectively.

Example 1. Suppose that Y “ tw1 , w2 , w3 , w4 u is the set of careers under consideration, and Mr. X wants to
select the best suitable career. M “ te1 , e2 , e3 u is a set of decision parameters. Mr. X describes the “most suitable
career" by defining a neutrosophic soft set pQ, Mq on Y that is a neutrosophic relation from Y to M as shown in
Table 1.

Table 1. Neutrosophic soft relation Q.

Q w1 w2 w3 w4
e1 p0.3, 0.4, 0.5q p0.4, 0.2, 0.3q p0.1, 0.5, 0.4q p0.2, 0.3, 0.4q
e2 p0.1, 0.5, 0.4q p0.3, 0.4, 0.6q p0.4, 0.4, 0.3q p0.5, 0.3, 0.8q
e3 p0.3, 0.4, 0.4q p0.4, 0.6, 0.7q p0.3, 0.5, 0.4q p0.5, 0.4, 0.6q


Now, Mr. X gives the most favorable decision object A, which is an NS on M defined as follows: A “
tpe1 , 0.5, 0.2, 0.4q, pe2 , 0.2, 0.3, 0.1q, pe3 , 0.2, 0.4, 0.6qu. By Definition 1, we have

TQp Aq pw1 q “ 0.3, IQp Aq pw1 q “ 0.4, FQp Aq pw1 q “ 0.4,

TQp Aq pw2 q “ 0.4, IQp Aq pw2 q “ 0.2, FQp Aq pw2 q “ 0.4,

TQp Aq pw3 q “ 0.2, IQp Aq pw3 q “ 0.4, FQp Aq pw3 q “ 0.3,

TQp Aq pw4 q “ 0.2, IQp Aq pw4 q “ 0.3, FQp Aq pw4 q “ 0.4.

Similarly,
TQp Aq pw1 q “ 0.4, IQp Aq pw1 q “ 0.4, FQp Aq pw1 q “ 0.3,

TQp Aq pw2 q “ 0.5, IQp Aq pw2 q “ 0.4, FQp Aq pw2 q “ 0.4,

TQp Aq pw3 q “ 0.4, IQp Aq pw3 q “ 0.4, FQp Aq pw3 q “ 0.3,

TQp Aq pw4 q “ 0.5, IQp Aq pw4 q “ 0.4, FQp Aq pw4 q “ 0.5.

Thus, we obtain

QpAq “ tpw1 , 0.3, 0.4, 0.4q, pw2 , 0.4, 0.2, 0.4q, pw3 , 0.2, 0.4, 0.3q, pw4 , 0.2, 0.3, 0.4qu,
QpAq “ tpw1 , 0.4, 0.4, 0.3q, pw2 , 0.5, 0.4, 0.4q, pw3 , 0.4, 0.4, 0.3q, pw4 , 0.5, 0.4, 0.5qu.

Hence, pQpAq, QpAqq is an NSRS of A.

The conventional neutrosophic soft set is a mapping from a parameter to a neutrosophic subset of the universe; let (Q, M) be a neutrosophic soft set. Now, we present the constructive definition of a neutrosophic soft rough relation by using a neutrosophic soft relation S from M × M = Ḿ to N(Y × Y = Ý), where Y is a universal set and M is a set of parameters.

Definition 2. A neutrosophic soft rough relation pSpBq, SpBqq on Y is an NSRS, S : Ḿ Ñ N pÝq is a


neutrosophic soft relation on Y defined by

Spei e j q “ tui u j | Dui P Qpei q, u j P Qpe j qu, ui u j P Ý, such that


TS pui u j , ei e j q ď mintTQ pui , ei q, TQ pu j , e j qu
IS pui u j , ei e j q ď maxtIQ pui , ei q, IQ pu j , e j qu
FS pui u j , ei e j q ď maxtFQ pui , ei q, FQ pu j , e j qu.
For any B P N pḾq, B “ t ei e j , TB pei e j q, IB pei e j q, FB pei e j q ui u j P Ḿu,
` ˘

TB pei e j q ď mintTA pei q, TA pe j qu,


IB pei e j q ď maxtI A pei q, I A pe j qu,
FB pei e j q ď maxtFA pei q, FA pe j qu.

The UNSA and the LNSA of B w.r.t pÝ, Ḿ, Sq are defined as follows:

SpBq “ tpui u j , TSpBq pui u j q, ISpBq pui u j q, FSpBq pui u j qq | ui u j P Ýu,

SpBq “ tpui u j , TSpBq pui u j q, ISpBq pui u j q, FSpBq pui u j qq | ui u j P Ýu,


where
ł ` ˘
TSpBq pui u j q “ TS pui u j , ei e j q ^ TB pei e j q ,
ei e j PḾ
ľ ` ˘
ISpBq pui u j q “ IS pui u j , ei e j q _ IB pei e j q ,
ei e j PM̃
ľ ` ˘
FSpBq pui u j q “ FS pui u j , ei e j q _ FB pei e j q ;
ei e j PM̃

ľ ` ˘
TSpBq pui u j q “ FS pui u j , ei e j q _ TB pei e j q ,
ei e j PḾ
ł ` ˘
ISpBq pui u j q “ p1 ´ IS pui u j , ei e j qq ^ IB pei e j q ,
ei e j PM̃
ł ` ˘
FSpBq pui u j q “ TS pui u j , ei e j q ^ FB pei e j q .
ei e j PM̃

The pair pSpBq, SpBqq is called NSRR and S, S : N pḾq Ñ N pÝq are called the LNSRA and the UNSRA
operators, respectively.

Remark 1. Consider an NS B on Ḿ and an NS A on M, according to the definition of NSRR, we get

TSpBq pui u j q ď mintTSp Aq pui q, TSp Aq pu j qu,


ISpBq pui u j q ď maxtISp Aq pui q, ISp Aq pu j qu,
FSpBq pui u j q ď maxtFSp Aq pui q.FSp Aq pu j qu.

Similarly, for LNSRA operator SpBq,

TSpBq pui u j q ď mintTSp Aq pui q, TSp Aq pu j qu,


ISpBq pui u j q ď maxtISp Aq pui q, ISp Aq pu j qu,
FSpBq pui u j q ď maxtFSp Aq pui q.FSp Aq pu j qu.

Example 2. Let Y “ tu1 , u2 , u3 u be a universal set and M “ te1 , e2 , e3 u a set of parameters. A neutrosophic
soft set pQ, Mq on Y can be defined in tabular form in Table 2 as follows:

Table 2. Neutrosophic soft set pQ, Mq.

Q u1 u2 u3
e1 p0.4, 0.5, 0.6q p0.7, 0.3, 0.2q p0.6, 0.3, 0.4q
e2 p0.5, 0.3, 0.6q p0.3, 0.4, 0.3q p0.7, 0.2, 0.3q
e3 p0.7, 0.2, 0.3q p0.6, 0.5, 0.4q p0.7, 0.2, 0.4q

Let E “ tu1 u2 , u2 u3 , u2 u2 , u3 u2 u Ď Ý and L “ te1 e3 , e2 e1 , e3 e2 u Ď Ḿ.


Then, a soft relation S on E (from L to E) can be defined in Table 3 as follows:


Table 3. Neutrosophic soft relation S.

S u1 u2 u2 u3 u2 u2 u3 u2
e1 e3 p0.4, 0.4, 0.5q p0.6, 0.3, 0.4q p0.5, 0.4, 0.2q p0.5, 0.4, 0.3q
e2 e1 p0.3, 0.3, 0.4q p0.3, 0.2, 0.3q p0.2, 0.3, 0.3q p0.7, 0.2, 0.2q
e3 e2 p0.3, 0.3, 0.2q p0.5, 0.3, 0.2q p0.2, 0.4, 0.4q p0.3, 0.4, 0.4q

Let A “ tpe1 , 0.2, 0.4, 0.6q, pe2 , 0.4, 0.5, 0.2q, pe3 , 0.1, 0.2, 0.4qu be an NS on M, then
SpAq “ tpu1 , 0.4, 0.2, 0.4q, pu2 , 0.3, 0.4, 0.3q, pu3 , 0.4, 0.2, 0.3qu,
SpAq “ tpu1 , 0.3, 0.5, 0.4q, pu2 , 0.2, 0.5, 0.6q, pu3 , 0.4, 0.5, 0.6qu.
Let B “ tpe1 e3 , 0.1, 0.3, 0.5q, pe2 e1 , 0.2, 0.4, 0.3q, pe3 e2 , 0.1, 0.2, 0.3qu be an NS on L, then
SpBq “ tpu1 u2 , 0.2, 0.3, 0.3q, pu2 u3 , 0.2, 0.3, 0.3q, pu2 u2 , 0.2, 0.4, 0.3q, pu3 u2 , 0.2, 0.4, 0.3qu,
SpBq “ tpu1 u2 , 0.2, 0.4, 0.4q, pu2 u3 , 0.2, 0.4, 0.5q, pu2 u2 , 0.3, 0.4, 0.5q, pu3 u2 , 0.2, 0.4, 0.5qu.
Hence, SpBq “ pSpBq, SpBqq is NSRR.

Definition 3. A neutrosophic soft rough graph (NSRG) on a non-empty set V is a 4-ordered tuple (V, M, Q(A), S(B)) such that
(i) M is a set of parameters,
(ii) Q is an arbitrary neutrosophic soft relation over V × M,
(iii) S is an arbitrary neutrosophic soft relation over V́ × Ḿ,
(iv) Q(A) = (QA, QA) is an NSRS of A,
(v) S(B) = (SB, SB) is an NSRR on V́ ⊂ V × V,
(vi) G = (Q(A), S(B)) is a neutrosophic soft rough graph, where G = (QA, SB) and G = (QA, SB) are the lower neutrosophic approximate graph (LNAG) and the upper neutrosophic approximate graph (UNAG), respectively, of the neutrosophic soft rough graph (NSRG) G = (Q(A), S(B)).

Example 3. Let V “ tv1 , v2 , v3 , v4 , v5 , v6 u be a vertex set and M “ te1 , e2 , e3 u a set of parameters. A neutrosophic
soft relation over V ˆ M can be defined in tabular form in Table 4 as follows:

Table 4. Neutrosophic soft relation Q.

Q v1 v2 v3 v4 v5 v6
e1 p0.4, 0.5, 0.6q p0.7, 0.3, 0.5q p0.6, 0.2, 0.3q p0.4, 0.4, 0.2q p0.5, 0.5, 0.6q p0.4, 0.5, 0.6q
e2 p0.5, 0.4, 0.2q p0.6, 0.4, 0.5q p0.7, 0.3, 0.4q p0.5, 0.3, 0.2q p0.4, 0.5, 0.4q p0.6, 0.5, 0.4q
e3 p0.5, 0.4, 0.1q p0.6, 0.3, 0.2q p0.5, 0.4, 0.3q p0.6, 0.2, 0.3q p0.5, 0.4, 0.4q p0.7, 0.3, 0.5q

Let A = {(e1, 0.5, 0.4, 0.6), (e2, 0.7, 0.4, 0.5), (e3, 0.6, 0.2, 0.5)} be an NS on M; then

\underline{Q}(A) = {(v1, 0.5, 0.4, 0.5), (v2, 0.6, 0.3, 0.5), (v3, 0.7, 0.4, 0.5), (v4, 0.6, 0.2, 0.5), (v5, 0.5, 0.4, 0.5), (v6, 0.6, 0.3, 0.5)},
\overline{Q}(A) = {(v1, 0.6, 0.4, 0.5), (v2, 0.5, 0.4, 0.6), (v3, 0.5, 0.4, 0.6), (v4, 0.5, 0.4, 0.5), (v5, 0.6, 0.4, 0.5), (v6, 0.6, 0.4, 0.5)}.

Let E “ tv1 v1 , v1 v2 , v2 v1 , v2 v3 , v4 v5 , v3 v4 , v5 v2 , v5 v6 u Ď V́ and L “ te1 e3 , e2 e1 , e3 e2 u Ď Ḿ.


Then, a neutrosophic soft relation S on E (from L to E) can be defined in Tables 5 and 6 as follows:

Table 5. Neutrosophic soft relation S.

S v1 v1 v1 v2 v2 v1 v2 v3
e1 e2 p0.4, 0.4, 0.2q p0.4, 0.4, 0.5q p0.4, 0.4, 0.5q p0.6, 0.3, 0.4q
e2 e3 p0.5, 0.4, 0.1q p0.4, 0.3, 0.2q p0.4, 0.3, 0.2q p0.5, 0.3, 0.2q
e1 e3 p0.4, 0.4, 0.1q p0.4, 0.2, 0.2q p0.4, 0.2, 0.2q p0.5, 0.3, 0.3q


Table 6. Neutrosophic soft relation S.

S v3 v4 v4 v5 v5 v2 v5 v6
e1 e2 p0.4, 0.2, 0.2q p0.4, 0.4, 0.2q p0.4, 0.3, 0.4q p0.3, 0.2, 0.3q
e2 e3 p0.6, 0.2, 0.4q p0.3, 0.2, 0.1q p0.4, 0.3, 0.2q p0.4, 0.3, 0.4q
e1 e3 p0.4, 0.2, 0.3q p0.4, 0.3, 0.1q p0.5, 0.3, 0.2q p0.5, 0.3, 0.5q

Let B “ tpe1 e2 , 0.4, 0.4, 0.5; q, pe2 e3 , 0.5, 0.4, 0.5q, pe1 e3 , 0.5, 0.2, 0.5qu be an NS on L, then
SB “ tpv1 v1 , 0.5, 0.4, 0.5q, pv1 v2 , 0.4, 0.2, 0.5q, pv2 v1 , 0.4, 0.2, 0.5q, pv2 v3 , 0.5, 0.3, 0.5q,
pv3 v4 , 0.5, 0.2, 0.5q, pv4 v5 , 0.4, 0.3, 0.5q, pv5 v2 , 0.5, 0.3, 0.5q, pv5 v6 , 0.5, 0.3, 0.5qu,
SB “ tpv1 v1 , 0.4, 0.4, 0.5qpv1 v2 , 0.5, 0.4, 0.4q, pv2 v1 , 0.5, 0.4, 0.4q, pv2 v3 , 0.4, 0.4, 0.5q,
pv3 v4 , 0.4, 0.4, 0.5q, pv4 v5 , 0.4, 0.4, 0.4q, pv5 v2 , 0.4, 0.4, 0.5q, pv5 v6 , 0.4, 0.4, 0.5qu.
Hence, SpBq “ pSB, SBq is NSRR on V́.

Thus, \underline{G} = (\underline{Q}A, \underline{S}B) and \overline{G} = (\overline{Q}A, \overline{S}B) are the LNAG and UNAG, respectively, as shown in Figure 1.

Figure 1. Neutrosophic soft rough graph G “ pG, Gq

Hence, G “ pG, Gq is NSRG.

Definition 4. Let G = (V, M, Q, S) be a neutrosophic soft rough graph on a non-empty set V. The order of G, denoted by O(G), is defined by

O(G) = O(\underline{G}) + O(\overline{G}), where O(\underline{G}) = \sum_{v \in V} \underline{Q}A(v) and O(\overline{G}) = \sum_{v \in V} \overline{Q}A(v).

The size of the neutrosophic soft rough graph G, denoted by S(G), is defined by

S(G) = S(\underline{G}) + S(\overline{G}), where S(\underline{G}) = \sum_{uv \in E} \underline{S}B(uv) and S(\overline{G}) = \sum_{uv \in E} \overline{S}B(uv).

Example 4. Let G be a neutrosophic soft rough graph as shown in Figure 1. Then

O(\underline{G}) = (3.5, 2.0, 3.0), O(\overline{G}) = (3.3, 2.4, 3.2), O(G) = O(\underline{G}) + O(\overline{G}) = (6.8, 4.4, 6.2), and
S(\underline{G}) = (3.2, 1.8, 3.0), S(\overline{G}) = (2.5, 2.4, 2.8), S(G) = S(\underline{G}) + S(\overline{G}) = (5.7, 4.2, 5.8).
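Since the order and size are plain componentwise sums, they are easy to compute programmatically. Below is a small illustrative Python sketch (the helper names are assumptions of this edit); it reproduces the order values above from the vertex data of Example 3, and the size is obtained in exactly the same way from the edge sets.

```python
# Order of an NSRG: componentwise sums over the lower and upper vertex sets.
def nsum(triples):
    return tuple(round(sum(t[k] for t in triples), 2) for k in range(3))

def order(QA_low, QA_up):
    ol, ou = nsum(QA_low.values()), nsum(QA_up.values())
    return ol, ou, tuple(round(a + b, 2) for a, b in zip(ol, ou))

# size(SB_low, SB_up) is defined in the same way over the edge sets.

# Vertex data of Example 3:
QA_low = {"v1": (0.5, 0.4, 0.5), "v2": (0.6, 0.3, 0.5), "v3": (0.7, 0.4, 0.5),
          "v4": (0.6, 0.2, 0.5), "v5": (0.5, 0.4, 0.5), "v6": (0.6, 0.3, 0.5)}
QA_up = {"v1": (0.6, 0.4, 0.5), "v2": (0.5, 0.4, 0.6), "v3": (0.5, 0.4, 0.6),
         "v4": (0.5, 0.4, 0.5), "v5": (0.6, 0.4, 0.5), "v6": (0.6, 0.4, 0.5)}
print(order(QA_low, QA_up))   # ((3.5, 2.0, 3.0), (3.3, 2.4, 3.2), (6.8, 4.4, 6.2))
```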


Definition 5. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two neutrosophic soft rough graphs on V. The union
of G1 and G2 is a neutrosophic soft rough graph G “ G1 Y G2 “ pG1 Y G2 , G1 Y G2 q, where G1 Y G2 “
pQA1 Y QA2 , SB1 Y SB2 q and G1 Y G2 “ pQA1 Y QA2 , SB1 Y SB2 q are neutrosophic graphs, such that
(i) @v P QA1 but v R QA2 .
TQA pvq “TQA pvq, TQA1 YQA2 pvq “ TQA1 pvq,
1 YQA2 1

IQA pvq “IQA pvq, IQA1 YQA2 pvq “ IQA1 pvq,


1 YQA2 1

FQA pvq “FQA pvq, FQA1 YQA2 pvq “ FQA1 pvq.


1 YQA2 1

(ii) @v R QA1 but v P QA2 .

TQA pvq “TQA2 pvq, TQA1 YQA2 pvq “ TQA2 pvq,


1 YQA2

IQA pvq “IQA2 pvq, IQA1 YQA2 pvq “ IQA2 pvq,


1 YQA2

FQA pvq “FQA2 pvq, FQA1 YQA2 pvq “ FQA2 pvq.


1 YQA2

(iii) @v P QA1 X QA2

TQA pvq “ maxtTQA pvq, TQA2 pvqu, TQA1 YQA2 pvq “ maxtTQA1 pvq, TQA2 pvqu,
1 YQA2 1

IQA pvq “ mintIQA pvq, IQA2 pvqu, IQA1 YQA2 pvq “ mintIQA1 pvq, IQA2 pvqu,
1 YQA2 1

FQA pvq “ mintFQA pvq, FQA2 pvqu, FQA1 YQA2 pvq “ mintFQA1 pvq, FQA2 pvqu.
1 YQA2 1

(iv) @vu P SB1 but vu R SB2 .

TSB pvuq “TSB pvuq, TSB1 YSB2 pvuq “ TSB1 pvuq,


1 YSB2 1

ISB YSB2 pvuq “ISB pvuq, ISB1 YSB2 pvuq “ ISB1 pvuq,
1 1

FSB YSB2 pvuq “FSB pvuq, FSB1 YSB2 pvuq “ FSB1 pvuq.
1 1

(v) @vu R SB1 but vu P SB2

TSB pvuq “TSB2 pvuq, TSB1 YSB2 pvuq “ TSB2 pvuq,


1 YSB2

ISB pvuq “ISB2 pvuq, ISB1 YSB2 pvuq “ ISB2 pvuq,


1 YSB2

FSB pvuq “FSB2 pvuq, FSB1 YSB2 pvuq “ FSB2 pvuq.


1 YSB2

(vi) @vu P SB1 X SB2

TSB pvuq “ maxtTSB pvuq, TSB2 pvuqu, TSB1 YSB2 pvuq “ maxtTS B1 pvuq, TSB2 pvuqu,
1 YSB2 1

ISB
1 YSB2 pvuq “ mintISB pvuq, ISB2 pvuqu, ISB1 YSB2 pvuq “ mintISB1 pvuq, ISB2 pvuqu,
1

FSB
1 YSB2 pvuq “ mintFSB pvuq, FSB2 pvuqu, FSB1 YSB2 pvuq “ mintFSB1 pvuq, FSB2 pvuqu.
1

Example 5. Let V “ tv1 , v2 , v3 , v4 u be a set of universes, and M “ te1 , e2 , e3 u a set of parameters. Then,
a neutrosophic soft relation over V ˆ M can be written as in Table 7.

Table 7. Neutrosophic soft relation Q.

Q v1 v2 v3 v4
e1 p0.5, 0.4, 0.3q p0.7, 0.6, 0.5q p0.7, 0.6, 0.4q p0.5, 0.7, 0.4q
e2 p0.3, 0.5, 0.6q p0.4, 0.5, 0.1q p0.3, 0.6, 0.5q p0.4, 0.8, 0.2q
e3 p0.7, 0.5, 0.8q p0.2, 0.3, 0.8q p0.7, 0.3, 0.5q p0.6, 0.4, 0.3q


Let A1 “ tpe1 , 0.5, 0.7, 0.8q, pe2 , 0.7, 0.5, 0.3q, pe3 , 0.4, 0.5, 0.3qu, and A2 “ tpe1 , 0.6, 0.3, 0.5q,
pe2 , 0.5, 0.8, 0.2q, pe3 , 0.5, 0.7, 0.2qu are two neutrosophic sets on M, Then, QpA1 q “ pQpA1 q, QpA1 qq and
QpA2 q “ pQpA2 q, QpA2 qq are NSRSs, where

QpA1 q “ tpv1 , 0.5, 0.6, 0.5q, pv2 , 0.5, 0.5, 0.7qpv3 , 0.5, 0.5, 0.7q, pv4 0.4, 0.5, 0.5qu,
QpA1 q “ tpv1 , 0.5, 0.5, 0.6q, pv2 , 0.5, 0.5, 0.3q, pv3 , 0.5, 0.5, 0.5q, pv4 0.5, 0.5, 0.3qu,
QpA2 q “ tpv1 , 0.6, 0.5, 0.5q, pv2 , 0.5, 0.7, 0.5q, pv3 , 0.5, 0.7, 0.5q, pv4 , 0.5, 0.6, 0.5qu,
QpA2 q “ tpv1 , 0.5, 0.4, 0.5q, pv2 , 0.6, 0.6, 0.2q, pv3 , 0.6, 0.6, 0.5q, pv4 , 0.5, 0.7, 0.2qu.

Let E “ tv1 v2 , v1 v4 , v2 v2 , v2 v3 , v3 v3 , v3 v4 u Ď V ˆ V, and L “ te1 e2 , e1 e3 , e2 e3 u Ă Ḿ. Then,


a neutrosophic soft relation on E can be written as in Table 8.

Table 8. Neutrosophic soft relation S.

S v1 v2 v1 v4 v2 v2 v2 v3 v3 v3 v3 v4
e1 e2 (0.3, 0.4 ,0.1) p0.4, 0.4, 0.2q p0.4, 0.5, 0.1q p0.3, 0.5, 0.4q p0.3, 0.4, 0.4q p0.4, 0.5, 0.2q
e1 e3 (0.2 ,0.3 ,0.3) p0.4, 0.3, 0.2q p0.2, 0.3, 0.5q p0.4, 0.3, 0.3q p0.5, 0.3, 0.3q p0.5, 0.4, 0.3q
e2 e3 (0.2,0.3,0.5) p0.3, 0.3, 0.3q p0.2, 0.3, 0.1q p0.4, 0.3, 0.1q p0.3, 0.3, 0.5q p0.3, 0.4, 0.3q

Let B1 “ tpe1 e2 , 0.5, 0.4, 0.5q, pe1 e3 , 0.3, 0.4, 0.5q, pe2 e3 , 0.4, 0.4, 0.3qu, and B2 “ tpe1 e2 , 0.5, 0.3, 0.2q,
pe1 e3 , 0.4, 0.3, 0.3q, pe2 e3 , 0.4, 0.6, 0.2qu are two neutrosophic sets on L, Then, SpB1 q “ pSpB1 q, SpB1 qq and
SpB2 q “ pSpB2 q, SpB2 qq are NSRRs, where

SpB1 q “ tpv1 v2 , 0.3, 0.4, 0.3q, pv1 v4 , 0.3, 0.4, 0.4q, pv2 v2 , 0.4, 0.4, 0.4q, pv2 v3 , 0.3, 0.4, 0.4q,
pv3 v3 , 0.3, 0.4, 0.5q, pv3 v4 , 0.3, 0.4, 0.5qu,
SpB1 q “ tpv1 v2 , 0.3, 0.4, 0.5q, pv1 v4 , 0.4, 0.4, 0.3q, pv2 v2 , 0.4, 0.4, 0.3q, pv2 v3 , 0.4, 0.4, 0.3q,
pv3 v3 , 0.3, 0.4, 0.5q, pv3 v4 , 0.4, 0.4, 0.3qu;
SpB2 q “ tpv1 v2 , 0.4, 0.6, 0.2q, pv1 v4 , 0.4, 0.6, 0.3q, pv2 v2 , 0.4, 0.6, 0.2q, pv2 v3 , 0.4, 0.6, 0.3q,
pv3 v3 , 0.4, 0.6, 0.3q, pv3 v4 , 0.4, 0.6, 0.3qu,
SpB2 q “ tpv1 v2 , 0.3, 0.3, 0.2q, pv1 v4 , 0.4, 0.3, 0.2q, pv2 v2 , 0.4, 0.3, 0.2q, pv2 v3 , 0.4, 0.3, 0.2q,
pv3 v3 , 0.4, 0.3, 0.3q, pv3 v4 , 0.4, 0.4, 0.2qu.

Thus, G1 “ pG1 , G1 q and G2 “ pG2 , G2 q are NSRGs, where G1 “ pQpA1 q, SpB1 qq, G1 “
pQpA1 q, SpB1 qq as shown in Figure 2.

Figure 2. Neutrosophic soft rough graph G1 “ pG1 , G1 q

G2 “ pQpA2 q, SpB2 qq, G2 “ pQpA2 q, SpB2 qq as shown in Figure 3.


Figure 3. Neutrosophic soft rough graph G2 “ pG2 , G2 q

The union of G1 “ pG1 , G1 q and G2 “ pG2 , G2 q is NSRG G “ G1 Y G2 “ pG1 Y G2 , G1 Y G2 q as shown


in Figure 4.

Figure 4. Neutrosophic soft rough graph G1 Y G2 “ pG1 Y G2 , G1 Y G2 q
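A compact way to read Definition 5 is that, on shared vertices or edges, the union takes the maximum truth value and the minimum indeterminacy and falsity values, while elements present in only one graph are copied unchanged; the same rule is applied separately to the lower and upper approximations. The following Python sketch (the function name n_union and the toy data are illustrative, not the data of Example 5) expresses this:

```python
# Union of two neutrosophic vertex (or edge) sets, as in Definition 5.
def n_union(A1, A2):
    out = {}
    for key in set(A1) | set(A2):
        if key in A1 and key in A2:
            out[key] = (max(A1[key][0], A2[key][0]),   # truth: max
                        min(A1[key][1], A2[key][1]),   # indeterminacy: min
                        min(A1[key][2], A2[key][2]))   # falsity: min
        else:
            out[key] = A1.get(key, A2.get(key))
    return out

# Hypothetical toy sets:
A1 = {"x": (0.5, 0.6, 0.5), "y": (0.4, 0.3, 0.2)}
A2 = {"x": (0.6, 0.5, 0.5), "z": (0.7, 0.2, 0.1)}
print(n_union(A1, A2))   # x -> (0.6, 0.5, 0.5); y and z are copied unchanged
```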

Definition 6. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two NSRGs on V. The intersection of G1 and G2 is a
neutrosophic soft rough graph G “ G1 X G2 “ pG1 X G2 , G1 X G2 q, where G1 X G2 “ pQA1 X QA2 , SB1 X
SB2 q and G1 X G2 “ pQA1 X QA2 , SB1 X SB2 q are neutrosophic graphs, respectively, such that

(i) @v P QA1 but v R QA2 .

TQA pvq “TQA pvq, TQA1 XQA2 pvq “ TQA1 pvq,


1 XQA2 1

IQA pvq “IQA pvq, IQA1 XQA2 pvq “ IQA1 pvq,


1 XQA2 1

FQA pvq “FQA pvq, FQA1 XQA2 pvq “ FQA1 pvq.


1 XQA2 1

(ii) @v R QA1 but v P QA2 .

TQA pvq “TQA2 pvq, TQA1 XQA2 pvq “ TQA2 pvq,


1 XQA2

IQA pvq “IQA2 pvq, IQA1 XQA2 pvq “ IQA2 pvq,


1 XQA2

FQA pvq “FQA2 pvq, FQA1 XQA2 pvq “ FQA2 pvq.


1 XQA2


(iii) @v P QA1 X QA2

TQA pvq “ mintTQA pvq, TQA2 pvqu, TQA1 XQA2 pvq “ mintTQA1 pvq, TQA2 pvqu,
1 XQA2 1

IQA pvq “ maxtIQA pvq, IQA2 pvqu, IQA1 XQA2 pvq “ maxtIQA1 pvq, IQA2 pvqu,
1 XQA2 1

FQA pvq “ maxtFQA pvq, FQA2 pvqu, FQA1 XQA2 pvq “ maxtFQA1 pvq, FQA2 pvqu.
1 XQA2 1

(iv) @vu P SB1 but vu R SB2 .

TSB pvuq “TSB pvuq, TSB1 XSB2 pvuq “ TSB1 pvuq,


1 XSB2 1

ISB XSB2 pvuq “ISB pvuq, ISB1 XSB2 pvuq “ ISB1 pvuq,
1 1

FSB XSB2 pvuq “FSB pvuq, FSB1 XSB2 pvuq “ FSB1 pvuq.
1 1

(v) @vu R SB1 but vu P SB2

TSB pvuq “TSB2 pvuq, TSB1 XSB2 pvuq “ TSB2 pvuq,


1 XSB2

ISB pvuq “ISB2 pvuq, ISB1 XSB2 pvuq “ ISB2 pvuq,


1 XSB2

FSB pvuq “FSB2 pvuq, FSB1 XSB2 pvuq “ FSB2 pvuq.


1 XSB2

(vi) @vu P SB1 X SB2

TSB pvuq “ mintTSB pvuq, TSB2 pvuqu, TSB1 XSB2 pvuq “ mintTS B1 pvuq, TSB2 pvuqu,
1 XSB2 1

ISB pvuq “ maxtISB pvuq, ISB2 pvuqu, ISB1 XSB2 pvuq “ maxtISB1 pvuq, ISB2 pvuqu,
1 XSB2 1

FSB pvuq “ maxtFSB pvuq, FSB2 pvuqu, FSB1 XSB2 pvuq “ maxtFSB1 pvuq, FSB2 pvuqu.
1 XSB2 1

Definition 7. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two neutrosophic soft rough graphs on V. The join
of G1 and G2 is a neutrosophic soft rough graph G “ G1 ` G2 “ pG1 ` G2 , G1 ` G2 q, where G1 ` G2 “
pQA1 ` QA2 , SB1 ` SB2 q and G1 ` G2 “ pQA1 ` QA2 , SB1 ` SB2 q are neutrosophic graph, respectively,
such that

(i) @v P QA1 but v R QA2 .

TQA pvq “TQA pvq, TQA1 `QA2 pvq “ TQA1 pvq,


1 `QA2 1

IQA pvq “IQA pvq, IQA1 `QA2 pvq “ IQA1 pvq,


1 `QA2 1

FQA pvq “FQA pvq, FQA1 `QA2 pvq “ FQA1 pvq.


1 `QA2 1

(ii) @v R QA1 but v P QA2 .

TQA pvq “TQA2 pvq, TQA1 `QA2 pvq “ TQA2 pvq,


1 `QA2

IQA pvq “IQA2 pvq, IQA1 `QA2 pvq “ IQA2 pvq,


1 `QA2

FQA pvq “FQA2 pvq, FQA1 `QA2 pvq “ FQA2 pvq.


1 `QA2

(iii) @v P QA1 X QA2

TQA pvq “ maxtTQA pvq, TQA2 pvqu, TQA1 `QA2 pvq “ maxtTQA1 pvq, TQA2 pvqu,
1 `QA2 1

IQA
1 `QA2 pvq “ mintIQA pvq, IQA2 pvqu, IQA1 `QA2 pvq “ mintIQA1 pvq, IQA2 pvqu,
1

FQA pvq “ mintFQA pvq, FQA2 pvqu, FQA1 `QA2 pvq “ mintFQA1 pvq, FQA2 pvqu.
1 `QA2 1


(iv) @vu P SB1 but vu R SB2 .

TSB pvuq “TSB pvuq, TSB1 `SB2 pvuq “ TSB1 pvuq,


1 `SB2 1

ISB pvuq “ISB pvuq, ISB1 `SB2 pvuq “ ISB1 pvuq,


1 `SB2 1

FSB pvuq “FSB pvuq, FSB1 `SB2 pvuq “ FSB1 pvuq.


1 `SB2 1

(v) @vu R SB1 but vu P SB2

TSB pvuq “TSB2 pvuq, TSB1 `SB2 pvuq “ TSB2 pvuq,


1 `SB2

ISB pvuq “ISB2 pvuq, ISB1 `SB2 pvuq “ ISB2 pvuq,


1 `SB2

FSB pvuq “FSB2 pvuq, FSB1 `SB2 pvuq “ FSB2 pvuq.


1 `SB2

(vi) @vu P SB1 X SB2

TSB pvuq “ maxtTSB pvuq, TSB2 pvuqu, TSB1 `SB2 pvuq “ maxtTS B1 pvuq, TSB2 pvuqu,
1 `SB2 1

ISB
1 `SB2 pvuq “ mintISB pvuq, ISB2 pvuqu, ISB1 `SB2 pvuq “ mintISB1 pvuq, ISB2 pvuqu,
1

FSB pvuq “ mintFSB pvuq, FSB2 pvuqu, FSB1 `SB2 pvuq “ mintFSB1 pvuq, FSB2 pvuqu.
1 `SB2 1

(vii) @vu P Ẽ, where Ẽ is the set of edges joining vertices of QA1 and QA2 .

TSB pvuq “ mintTQA pvq, TQA2 puqu, TSB1 `SB2 pvuq “ mintTQA1 pvq, TQA2 puqu,
1 `SB2 1

ISB
1 `SB2 pvuq “ maxtIQA pvq, IQA2 puqu, ISB1 `SB2 pvuq “ maxtIQA1 pvq, IQA2 puqu,
1

FSB pvuq “ maxtFQA pvq, FQA2 puqu, FSB1 `SB2 pvuq “ maxtFQA1 pvq, FQA2 puqu.
1 `SB2 1

Definition 8. The Cartesian product of G1 and G2 is a neutrosophic soft rough graph G = G1 ˙ G2 = (G1 ˙ G2, G1 ˙ G2), where G1 ˙ G2 = (QA1 ˙ QA2, SB1 ˙ SB2) and G1 ˙ G2 = (QA1 ˙ QA2, SB1 ˙ SB2) are neutrosophic graphs, such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv1 qu, TpQA1 ˙QA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv1 qu,
1 ˙QA2 q 1

IpQA pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv1 qu, IpQA1 ˙QA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv1 qu,
1 ˙QA2 q 1

FpQA pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv1 qu, FpQA1 ˙QA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv1 qu.
1 ˙QA2 q 1

(ii) @v1 v2 P SB2 , v P QA1 .


` ˘
TpSB pv, v1 qpv, v2 q “ mintTQA pvq, TSB2 pv1 v2 qu,
1 ˙SB2 q 1
` ˘
TpSB1 ˙SB2 q pv, v1 qpv, v2 q “ mintTQA1 pvq, TSB2 pv1 v2 qu,
` ˘
IpSB ˙SB2 q pv, v1 qpv, v2 q “ maxtIQA pvq, ISB2 pv1 v2 qu,
1 1
` ˘
IpSB1 ˙SB2 q pv, v1 qpv, v2 q “ maxtIQA1 pvq, ISB2 pv1 v2 qu,
` ˘
FpSB ˙SB2 q pv, v1 qpv, v2 q “ maxtFQA pvq, FSB2 pv1 v2 qu,
1 1
` ˘
FpSB1 ˙SB2 q pv, v1 qpv, v2 q “ maxtFQA1 pvq, FSB2 pv1 v2 qu.


(iii) @v1 v2 P SB1 , v P QA2 .


` ˘
TpSB1 ˙SB2 q pv1 , vqpv2 , vq “ mintTSB1 pv1 v2 q, TQA2 pvqu,
` ˘
TpSB ˙SB2 q pv1 , vqpv2 , vq “ mintTSB pv1 v2 q, TQA2 pvqu,
1 1
` ˘
IpSB ˙SB2 q pv1 , vqpv2 , vq “ maxtISB pv1 v2 q, IQA2 pvqu,
1 1
` ˘
IpSB1 ˙SB2 q pv1 , vqpv2 , vq “ maxtISB1 pv1 v2 q, IQA2 pvqu,
` ˘
FpSB ˙SB2 q pv1 , vqpv2 , vq “ maxtFSB pv1 v2 q, FQA2 pvqu,
1 1
` ˘
FpSB1 ˙SB2 q pv1 , vqpv2 , vq “ maxtFSB1 pv1 v2 q, FQA2 pvqu.

Definition 9. The cross product of G1 and G2 is a neutrosophic soft rough graph G “ G1 e G2 “ pG1 e
G2 , G1 e G2 q, where G1 e G2 “ pQA1 e QA2 , SB1 e SB2 q and G1 e G2 “ pQA1 e QA2 , SB1 e SB2 q are
neutrosophic graphs, respectively, such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv1 qu, TpQA1 eQA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv1 qu,
1 eQA2 q 1

IpQA
1 eQA2 q pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv1 qu, IpQA1 eQA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv1 qu,
1

FpQA pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv1 qu, FpQA1 eQA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv1 qu.
1 eQA2 q 1

(ii) @v1 u1 P SB1 , v2 u2 P SB2 .


` ˘
TpSB pv1 , v2 qpu1 , u2 q “ mintTSB pv1 u1 q, TSB2 pv1 u2 qu,
1 eSB2 q 1
` ˘
TpSB1 eSB2 q pv1 , v2 qpu1 , u2 q “ mintTSB1 pv1 u1 q, TSB2 pv1 u2 qu,
` ˘
IpSB eSB2 pv1 , v2 qpu1 , u2 q “ maxtISB pv1 u1 q, ISB2 pv1 u2 qu,
1 1
` ˘
IpSB1 eSB2 q pv1 , v2 qpu1 , u2 q “ maxtISB1 pv1 u1 q, ISB2 pv1 u2 qu,
` ˘
FpSB eSB2 pv1 , v2 qpu1 , u2 q “ maxtFSB pv1 u1 q, FSB2 pv1 u2 qu,
1 1
` ˘
FpSB1 eSB2 q pv1 , v2 qpu1 , u2 q “ maxtFSB1 pv1 u1 q, FSB2 pv1 u2 qu.

Definition 10. The rejection of G1 and G2 is a neutrosophic soft rough graph G = G1|G2 = (G1|G2, G1|G2), where G1|G2 = (QA1|QA2, SB1|SB2) and G1|G2 = (QA1|QA2, SB1|SB2) are neutrosophic graphs such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv2 qu, TpQA1 |QA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv2 qu,
1 |QA2 q 1

IpQA pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv2 qu, IpQA1 |QA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv2 qu,
1 |QA2 q 1

FpQA pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv2 qu, FpQA1 |QA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv2 qu.
1 |QA2 q 1

(ii) @v2 u2 R SB2 , v P QA1 .


` ˘
TpSB pv, v2 qpv, u2 q “ mintTQA pvq, TQA2 pv2 q, TQA2 pu2 qu,
1 |SB2 q 1
` ˘
TpSB1 |QB2 q pv, v2 qpv, u2 q “ mintTQA1 pvq, TQA2 pv2 q, TQA2 pu2 qu,
` ˘
pISB |SB2 q pv, v2 qpv, u2 q “ maxtIQA pvq, IQA2 pv2 q, IQA2 pu2 qu,
1 1
` ˘
pISB1 |SB2 q pv, v2 qpv, u2 q “ maxtIQA1 pvq, IQA2 pv2 q, IQA2 pu2 qu,
` ˘
pFSB |SB2 q pv, v2 qpv, u2 q “ maxtFQA pvq, FQA2 pv2 q, FQA2 pu2 qu,
1 1
` ˘
pFSB1 |SB2 q pv, v2 qpv, u2 q “ maxtFQA1 pvq, FQA2 pv2 q, FQA2 pu2 qu.


(iii) @v1 u1 R SB1 , v P QA2 ,


` ˘
TpSB1 |SB2 q pv1 , vqpu1 , vq “ mintTQA1 pv1 q, TQA1 pu1 q, TQA2 pvqu,
` ˘
IpSB1 |SB2 q pv1 , vqpu1 , vq “ maxtIQA1 pv1 q, IQA1 pu1 q, IQA2 pvqu,
` ˘
FpSB1 |SB2 q pv1 , vqpu1 , vq “ maxtFQA1 pv1 q, FQA1 pu1 q, FQA2 pvqu,
` ˘
TpSB |SB2 q pv1 , vqpu1 , vq “ mintTQA pv1 q, TQA pu1 q, TQA2 pvqu,
1 1 1
` ˘
IpSB |SB2 q pv1 , vqpu1 , vq “ maxtIQA pv1 q, IQA pu1 q, IQA2 pvqu,
1 1 1
` ˘
FpSB |SB2 q pv1 , vqpu1 , vq “ maxtFQA pv1 q, FQA pu1 q, FQA2 pvqu.
1 1 1

(iv) @v1 u1 R SB1 , v2 u2 R SB2 , v1 “ u1 .


` ˘
TpSB1 |SB2 q pv1 , v2 qpu1 , u2 q “ mintTQA1 pv1 q, TQA1 pu1 q, TQA2 pv2 q, TQA2 pu2 qu,
` ˘
IpSB1 |SB2 q pv1 , v2 qpu1 , u2 q “ maxtIQA1 pv1 q, IQA1 pu1 q, IQA2 pv2 q, IQA2 pu2 qu,
` ˘
FpSB1 |SB2 q pv1 , v2 qpu1 , u2 q “ maxtFQA1 pv1 q, FQA1 pu1 q, FQA2 pv2 q, FQA2 pu2 qu,
` ˘
TpSB |SB2 q pv1 , v2 qpu1 , u2 q “ mintTQA pv1 q, TQA pu1 q, TQA2 pv2 q, TQA2 pu2 qu,
1 1 1
` ˘
IpSB |SB2 q pv1 , v2 qpu1 , u2 q “ maxtIQA pv1 q, IQA pu1 q, IQA2 pv2 q, IQA2 pu2 qu,
1 1 1
` ˘
FpSB |SB2 q pv1 , v2 qpu1 , u2 q “ maxtFQA pv1 q, FQA pu1 q, FQA2 pv2 q, FQA2 pu2 qu,
1 1 1

Example 6. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two neutrosophic soft rough graphs on V, where
G1 “ pQA1 , SB1 q and G1 “ pQA1 , SB1 q are neutrosophic graphs as shown in Figure 2 and G2 “ pQA2 , SB2 q
and G2 “ pQA2 , SB2 q are neutrosophic graphs as shown in Figure 3. The Cartesian product of G1 “ pG1 , G1 q
and G2 “ pG2 , G2 q is NSRG G “ G1 ˆ G2 “ pG1 ˆ G2 , G1 ˆ G2 q as shown in Figure 5.


Figure 5. Cartesian product of two neutrosophic soft rough graphs G1 ˆ G2

Definition 11. The symmetric difference of G1 and G2 is a neutrosophic soft rough graph G “ G1 ‘ G2 “
pG1 ‘ G2 , G1 ‘ G2 q, where G1 ‘ G2 “ pQA1 ‘ QA2 , SB1 ‘ SB2 q and G1 ‘ G2 “ pQA1 ‘ QA2 , SB1 ‘ SB2 q
are neutrosophic graphs, respectively, such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv2 qu, TpQA1 ‘QA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv2 qu,
1 ‘QA2 q 1

IpQA
1 ‘QA2 q pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv2 qu, IpQA1 ‘QA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv2 qu,
1

FpQA
1 ‘QA2 q pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv2 qu, FpQA1 ‘QA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv2 qu.
1

(ii) @v1 v2 P SB2 , v P QA1 .


` ˘
TpSB pv, v1 qpv, v2 q “ mintTQA pvq, TSB2 pv1 v2 qu,
1 ‘SB2 q 1
` ˘
TpSB1 ‘SB2 q pv, v1 qpv, v2 q “ mintTQA1 pvq, TSB2 pv1 v2 qu,
` ˘
IpSB ‘SB2 q pv, v1 qpv, v2 q “ maxtIQA pvq, ISB2 pv1 v2 qu,
1 1
` ˘
IpSB1 ‘SB2 q pv, v1 qpv, v2 q “ maxtIQA1 pvq, ISB2 pv1 v2 qu,
` ˘
FpSB ‘SB2 q pv, v1 qpv, v2 q “ maxtFQA pvq, FSB2 pv1 v2 qu,
1 1
` ˘
FpSB1 ‘SB2 q pv, v1 qpv, v2 q “ maxtFQA1 pvq, FSB2 pv1 v2 qu.


(iii) @v1 v2 P SB1 , v P QA2 .


` ˘
TpSB pv1 , vqpv2 , vq “ mintTSB pv1 v2 q, TQA2 pvqu,
1 ‘SB2 q 1
` ˘
TpSB1 ‘SB2 q pv1 , vqpv2 , vq “ mintTSB1 pv1 v2 q, TQA2 pvqu,
` ˘
IpSB ‘SB2 q pv1 , vqpv2 , vq “ maxtISB pv1 v2 q, IQA2 pvqu,
1 1
` ˘
IpSB1 ‘SB2 q pv1 , vqpv2 , vq “ maxtISB1 pv1 v2 q, IQA2 pvqu,
` ˘
FpSB ‘SB2 q pv1 , vqpv2 , vq “ maxtFSB pv1 v2 q, FQA2 pvqu,
1 1
` ˘
FpSB1 ‘SB2 q pv1 , vqpv2 , vq “ maxtFSB1 pv1 v2 q, FQA2 pvqu.

(iv) ∀ v1 u1 ∈ SB1, v2 u2 ∉ SB2.


` ˘
TpSB pv1 , v2 qpu1 , u2 q “ mintTSB pv1 u1 q, TQA2 pv2 q, TQA2 pu2 qu,
1 ‘SB2 q 1
` ˘
TpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ mintTSB1 pv1 u1 q, TQA2 pv2 q, TQA2 pu2 qu,
` ˘
IpSB ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtISB pv1 u1 q, IQA2 pv2 q, IQA2 pu2 qu,
1 1
` ˘
IpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtISB1 pv1 u1 q, IQA2 pv2 q, IQA2 pu2 qu,
` ˘
FpSB ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtFSB pv1 u1 q, FQA2 pv2 q, FQA2 pu2 qu,
1 1
` ˘
FpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtFSB1 pv1 u1 q, FQA2 pv2 q, FQA2 pu2 qu.

(v) @v1 u1 R SB1 , v2 u2 P SB2 .


` ˘
TpSB pv1 , v2 qpu1 , u2 q “ mintTQA pv1 q, TQA pu1 q, TSB2 pv2 u2 qu,
1 ‘SB2 q 1 1
` ˘
TpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ mintTQA1 pv1 q, TQA1 pu1 q, TSB2 pv2 u2 qu,
` ˘
IpSB ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtIQA pv1 q, IQA pu1 q, ISB2 pv2 u2 qu,
1 1 1
` ˘
IpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtIQA1 pv1 q, IQA1 pu1 q, ISB2 pv2 u2 qu,
` ˘
FpSB ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtFQA pv1 q, FQA pu1 q, FSB2 pv2 u2 qu,
1 1 1
` ˘
FpSB1 ‘SB2 q pv1 , v2 qpu1 , u2 q “ maxtFQA1 pv1 q, FQA1 pu1 q, FSB2 pv2 u2 qu.

Example 7. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two neutrosophic soft rough graphs on V, where
G1 “ pQA1 , SB1 q and G1 “ pQA1 , SB1 q are neutrosophic graphs as shown in Figure 6 and G2 “ pQA2 , SB2 q
and G2 “ pQA2 , SB2 q are neutrosophic graphs as shown in Figure 7.

Figure 6. Neutrosophic soft rough graph G1 “ pG1 , G1 q


Figure 7. Neutrosophic soft rough graph G2 “ pG2 , G2 q

The symmetric difference of G1 and G2 is G “ G1 ‘ G2 “ pG1 ‘ G2 , G1 ‘ G2 q, where G1 ‘ G2 “


pQA1 ‘ QA2 , SB1 ‘ SB2 q and G1 ‘ G2 “ pQA1 ‘ QA2 , SB1 ‘ SB2 q are neutrosophic graphs as shown
in Figure 8.

Figure 8. Neutrosophic soft rough graph G1 ‘ G2 “ pG1 ‘ G2 , G1 ‘ G2 q

Definition 12. The lexicographic product of G1 and G2 is a neutrosophic soft rough graph G “ G1 d G2 “ pG1 ˚ d
G2 ˚ , G1˚ d G2˚ q, where G1 ˚ d G2 ˚ “ pQA1 d QA2 , SB1 d SB2 q and G1˚ d G2˚ “ pQA1 d QA2 , SB1 d SB2 q are
neutrosophic graphs, respectively, such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv2 qu, TpQA1 dQA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv2 qu,
1 dQA2 q 1

IpQA
1 dQA2 q pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv2 qu, IpQA1 dQA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv2 qu,
1

FpQA
1 dQA2 q pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv2 qu, FpQA1 dQA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv2 qu.
1

(ii) @v1 v2 P SB2 , v P QA1 .


` ˘
TpSB pv, v1 qpv, v2 q “ mintTQA pvq, TSB2 pv1 v2 qu,
1 dSB2 q 1
` ˘
TpSB1 dSB2 q pv, v1 qpv, v2 q “ mintTQA1 pvq, TSB2 pv1 v2 qu,
` ˘
IpSB dSB2 q pv, v1 qpv, v2 q “ maxtIQA pvq, ISB2 pv1 v2 qu,
1 1
` ˘
IpSB1 dSB2 q pv, v1 qpv, v2 q “ maxtIQA1 pvq, ISB2 pv1 v2 qu,
` ˘
FpSB dSB2 q pv, v1 qpv, v2 q “ maxtFQA pvq, FSB2 pv1 v2 qu,
1 1
` ˘
FpSB1 dSB2 q pv, v1 qpv, v2 q “ maxtFQA1 pvq, FSB2 pv1 v2 qu.


(iii) @v1 u1 P SB1 , v1 u2 P SB2 .


` ˘
TpSB pv1 , v1 qpu1 , u2 q “ mintTSB pv1 u1 q, TSB2 pv1 u2 qu,
1 dSB2 q 1
` ˘
TpSB1 dSB2 q pv1 , v1 qpu1 , u2 q “ mintTSB1 pv1 u1 q, TSB2 pv1 u2 qu,
` ˘
IpSB dSB2 pv1 , v1 qpu1 , u2 q “ maxtISB pv1 u1 q, ISB2 pv1 u2 qu,
1 1
` ˘
IpSB1 dSB2 q pv1 , v1 qpu1 , u2 q “ maxtISB1 pv1 u1 q, ISB2 pv1 u2 qu,
` ˘
FpSB dSB2 pv1 , v1 qpu1 , u2 q “ maxtFSB pv1 u1 q, FSB2 pv1 u2 qu,
1 1
` ˘
FpSB1 dSB2 q pv1 , v1 qpu1 , u2 q “ maxtFSB1 pv1 u1 q, FSB2 pv1 u2 qu.

Definition 13. The strong product of G1 and G2 is a neutrosophic soft rough graph G “ G1 b G2 “ pG1 ˚ b
G2 ˚ , G1˚ b G2˚ q, where G1 ˚ b G2 ˚ “ pQA1 b QA2 , SB1 b SB2 q and G1˚ b G2˚ “ pQA1 b QA2 , SB1 b SB2 q
are neutrosophic graphs, respectively, such that

(i) @ pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv2 qu, TpQA1 bQA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv2 qu,
1 bQA2 q 1

IpQA
1 bQA2 q pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv2 qu, IpQA1 bQA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv2 qu,
1

FpQA pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv2 qu, FpQA1 bQA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv2 qu.
1 bQA2 q 1

(ii) @v1 v2 P SB2 , v P QA1 .


` ˘
TpSB pv, v1 qpv, v2 q “ mintTQA pvq, TSB2 pv1 v2 qu,
1 bSB2 q 1
` ˘
TpSB1 bSB2 q pv, v1 qpv, v2 q “ mintTQA1 pvq, TSB2 pv1 v2 qu,
` ˘
IpSB bSB2 q pv, v1 qpv, v2 q “ maxtIQA pvq, ISB2 pv1 v2 qu,
1 1
` ˘
IpSB1 bSB2 q pv, v1 qpv, v2 q “ maxtIQA1 pvq, ISB2 pv1 v2 qu,
` ˘
FpSB bSB2 q pv, v1 qpv, v2 q “ maxtFQA pvq, FSB2 pv1 v2 qu,
1 1
` ˘
FpSB1 bSB2 q pv, v1 qpv, v2 q “ maxtFQA1 pvq, FSB2 pv1 v2 qu.

(iii) @v1 v2 P SB1 , v P QA2 .


` ˘
TpSB pv1 , vqpv2 , vq “ mintTSB pv1 v2 q, TQA2 pvqu,
1 bSB2 q 1
` ˘
TpSB1 bSB2 q pv1 , vqpv2 , vq “ mintTSB1 pv1 v2 q, TQA2 pvqu,
` ˘
IpSB bSB2 q pv1 , vqpv2 , vq “ maxtISB pv1 v2 q, IQA2 pvqu,
1 1
` ˘
IpSB1 bSB2 q pv1 , vqpv2 , vq “ maxtISB1 pv1 v2 q, IQA2 pvqu,
` ˘
FpSB bSB2 q pv1 , vqpv2 , vq “ maxtFSB pv1 v2 q, FQA2 pvqu,
1 1
` ˘
FpSB1 bSB2 q pv1 , vqpv2 , vq “ maxtFSB1 pv1 v2 q, FQA2 pvqu.


(iv) @v1 u1 P SB1 , v1 u2 P SB2 .


` ˘
TpSB pv1 , v1 qpu1 , u2 q “ mintTSB pv1 u1 q, TSB2 pv1 u2 qu,
1 bSB2 q 1
` ˘
TpSB1 bSB2 q pv1 , v1 qpu1 , u2 q “ mintTSB1 pv1 u1 q, TSB2 pv1 u2 qu,
` ˘
IpSB bSB2 pv1 , v1 qpu1 , u2 q “ maxtISB pv1 u1 q, ISB2 pv1 u2 qu,
1 1
` ˘
IpSB1 bSB2 q pv1 , v1 qpu1 , u2 q “ maxtISB1 pv1 u1 q, ISB2 pv1 u2 qu,
` ˘
FpSB bSB2 pv1 , v1 qpu1 , u2 q “ maxtFSB pv1 u1 q, FSB2 pv1 u2 qu,
1 1
` ˘
FpSB1 bSB2 q pv1 , v1 qpu1 , u2 q “ maxtFSB1 pv1 u1 q, FSB2 pv1 u2 qu.

Definition 14. The composition of G1 and G2 is a neutrosophic soft rough graph G “ G1 rG2 s “
pG1 ˚ rG2 ˚ s, G1˚ rG2˚ sq, where G1 ˚ rG2 ˚ s “ pQA1 rQA2 s, SB1 rSB2 sqs and G1˚ rG2˚ s “ pQA1 rQA2 s, SB1 rSB2 sq
are neutrosophic graphs, respectively, such that

(i) @pv1 , v2 q P QA1 ˆ QA2 .

TpQA pv1 , v2 q “ mintTQA pv1 q, TQA2 pv2 qu, TpQA1 ˆQA2 q pv1 , v2 q “ mintTQA1 pv1 q, TQA2 pv2 qu,
1 ˆQA2 q 1

IpQA
1 ˆQA2 q pv1 , v2 q “ maxtIQA pv1 q, IQA2 pv2 qu, IpQA1 ˆQA2 q pv1 , v2 q “ maxtIQA1 pv1 q, IQA2 pv2 qu,
1

FpQA pv1 , v2 q “ maxtFQA pv1 q, FQA2 pv2 qu, FpQA1 ˆQA2 q pv1 , v2 q “ maxtFQA1 pv1 q, FQA2 pv2 qu.
1 ˆQA2 q 1

(ii) @v1 v2 P SB2 , v P QA1 .


` ˘
TpSB pv, v1 qpv, v2 q “ mintTQA pvq, TSB2 pv1 v2 qu,
1 ˆSB2 q 1
` ˘
TpSB1 ˆSB2 q pv, v1 qpv, v2 q “ mintTQA1 pvq, TSB2 pv1 v2 qu,
` ˘
IpSB ˆSB2 q pv, v1 qpv, v2 q “ maxtIQA pvq, ISB2 pv1 v2 qu,
1 1
` ˘
IpSB1 ˆSB2 q pv, v1 qpv, v2 q “ maxtIQA1 pvq, ISB2 pv1 v2 qu,
` ˘
FpSB ˆSB2 q pv, v1 qpv, v2 q “ maxtFQA pvq, FSB2 pv1 v2 qu,
1 1
` ˘
FpSB1 ˆSB2 q pv, v1 qpv, v2 q “ maxtFQA1 pvq, FSB2 pv1 v2 qu.

(iii) @v1 v2 P SB1 , v P QA2 .


` ˘
TpSB pv1 , vqpv2 , vq “ mintTSB pv1 v2 q, TQA2 pvqu,
1 ˆSB2 q 1
` ˘
TpSB1 ˆSB2 q pv1 , vqpv2 , vq “ mintTSB1 pv1 v2 q, TQA2 pvqu,
` ˘
IpSB ˆSB2 q pv1 , vqpv2 , vq “ maxtISB pv1 v2 q, IQA2 pvqu,
1 1
` ˘
IpSB1 ˆSB2 q pv1 , vqpv2 , vq “ maxtISB1 pv1 v2 q, IQA2 pvqu,
` ˘
FpSB ˆSB2 q pv1 , vqpv2 , vq “ maxtFSB pv1 v2 q, FQA2 pvqu,
1 1
` ˘
FpSB1 ˆSB2 q pv1 , vqpv2 , vq “ maxtFSB1 pv1 v2 q, FQA2 pvqu.


(iv) @v1 u1 P SB1 , v1 ‰ u2 P QA2 .


` ˘
TpSB pv1 , v1 qpu1 , u2 q “ mintTSB pv1 u1 q, TSB2 pv1 u2 qu,
1 ˆSB2 q 1
` ˘
TpSB1 ˆSB2 q pv1 , v1 qpu1 , u2 q “ mintTSB1 pv1 u1 q, TSB2 pv1 u2 qu,
` ˘
IpSB ˆSB2 pv1 , v1 qpu1 , u2 q “ maxtISB pv1 u1 q, ISB2 pv1 u2 qu,
1 1
` ˘
IpSB1 ˆSB2 q pv1 , v1 qpu1 , u2 q “ maxtISB1 pv1 u1 q, ISB2 pv1 u2 qu,
` ˘
FpSB ˆSB2 pv1 , v1 qpu1 , u2 q “ maxtFSB pv1 u1 q, FSB2 pv1 u2 qu,
1 1
` ˘
FpSB1 ˆSB2 q pv1 , v1 qpu1 , u2 q “ maxtFSB1 pv1 u1 q, FSB2 pv1 u2 qu.

Definition 15. Let G = (\underline{G}, \overline{G}) be a neutrosophic soft rough graph. The complement of G, denoted by Ǵ = (\underline{Ǵ}, \overline{Ǵ}), is a neutrosophic soft rough graph, where \underline{Ǵ} = (Q́A, ŚB) and \overline{Ǵ} = (Q́A, ŚB) are neutrosophic graphs (built from the lower and the upper approximations, respectively) such that

(i) for all v ∈ QA, the vertex memberships are unchanged in both approximations:

T_{Q́A}(v) = T_{QA}(v), I_{Q́A}(v) = I_{QA}(v), F_{Q́A}(v) = F_{QA}(v);

(ii) for all v, u ∈ QA, in both the lower and the upper approximations,

T_{ŚB}(vu) = min{T_{QA}(v), T_{QA}(u)} − T_{SB}(vu),
I_{ŚB}(vu) = max{I_{QA}(v), I_{QA}(u)} − I_{SB}(vu),
F_{ŚB}(vu) = max{F_{QA}(v), F_{QA}(u)} − F_{SB}(vu).

Example 8. Consider an NSRG G as shown in Figure 9.

Figure 9. Neutrosophic soft rough graph G “ pG, Gq

´ is obtained by using the Definition 15, where Ǵ “ pQA,


The complement of G is Ǵ “ pǴ, Gq ´ SBq
´ and
G ´ SBq
´ “ pQA, ´ are neutrosophic graphs as shown in Figure 10.


´
Figure 10. Neutrosophic soft rough graph Ǵ “ pǴ, Gq

Definition 16. A neutrosophic soft rough graph G is called self-complementary if G = Ǵ, i.e.,

(i) for all v ∈ QA, T_{Q́A}(v) = T_{QA}(v), I_{Q́A}(v) = I_{QA}(v), F_{Q́A}(v) = F_{QA}(v);
(ii) for all v, u ∈ QA, T_{ŚB}(vu) = T_{SB}(vu), I_{ŚB}(vu) = I_{SB}(vu), F_{ŚB}(vu) = F_{SB}(vu),

where the conditions hold for both the lower and the upper approximations.

Definition 17. A neutrosophic soft rough graph G is called strong neutrosophic soft rough graph if @uv P SB,

TSB pvuq “ mintTQA pvq, TQA puqu, ISB pvuq “ maxtIQA pvq, IQA puquq, FSB pvuq “ maxtFQA pvq, FQA puqu,
TSB pvuq “ mintTQA pvq, TQA puqu, ISB pvuq “ maxtIQA pvq, IQA puqu, FSB pvuq “ maxtFQA pvq, FQA puqu.

Example 9. Consider a graph G such that V “ tu, v, wu and E “ tuv, vw, wuu, as shown in Figure 11. Let
QA be a neutrosophic soft rough set of V and let SB be a neutrosophic soft rough set of E defined in the Tables 9
and 10, respectively.

Table 9. Neutrosophic soft rough set on V.

V QA QA
u p0.8, 0.5, 0.2q p0.7, 0.5, 0.2q
v p0.9, 0.5, 0.1q p0.7, 0.5, 0.2q
w p0.7, 0.5, 0.1q p0.7, 0.5, 0.2q

Table 10. Neutrosophic soft rough set on E.

E SB SB
uv p0.8, 0.5, 0.2q p0.7, 0.5, 0.2q
vw p0.7, 0.5, 0.1q p0.7, 0.5, 0.2q
wu p0.7, 0.5, 0.2q p0.7, 0.5, 0.2q


Figure 11. Strong neutrosophic soft rough graph G “ pQA, SBq

Hence, G “ pQA, SBq is a strong neutrosophic soft rough graph.
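The strong-graph condition of Definition 17 can be checked directly from Tables 9 and 10. The following is a small Python sketch (the helper name is_strong is an illustrative choice of this edit), applied to both the lower and the upper approximations:

```python
# Check Definition 17: every edge must have T = min of its endpoints' T and
# I, F = max of its endpoints' I, F.
def is_strong(vertices, edges):
    for (v, u), (t, i, f) in edges.items():
        tv, iv, fv = vertices[v]
        tu, iu, fu = vertices[u]
        if (t, i, f) != (min(tv, tu), max(iv, iu), max(fv, fu)):
            return False
    return True

QA_low = {"u": (0.8, 0.5, 0.2), "v": (0.9, 0.5, 0.1), "w": (0.7, 0.5, 0.1)}
SB_low = {("u", "v"): (0.8, 0.5, 0.2), ("v", "w"): (0.7, 0.5, 0.1), ("w", "u"): (0.7, 0.5, 0.2)}
QA_up = {"u": (0.7, 0.5, 0.2), "v": (0.7, 0.5, 0.2), "w": (0.7, 0.5, 0.2)}
SB_up = {("u", "v"): (0.7, 0.5, 0.2), ("v", "w"): (0.7, 0.5, 0.2), ("w", "u"): (0.7, 0.5, 0.2)}
print(is_strong(QA_low, SB_low) and is_strong(QA_up, SB_up))   # True
```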

Definition 18. A neutrosophic soft rough graph G is called a complete neutrosophic soft rough graph if, for all v, u ∈ QA,

TSB pvuq “ mintTQA pvq, TQA puqu, ISB pvuq “ maxtIQA pvq, IQA puqu, FSB pvuq “ maxtFQA pvq, FQA puqu,
TSB pvuq “ mintTQA pvq, TQA puqu, ISB pvuq “ maxtIQA pvq, IQA puqu, FSB pvuq “ maxtFQA pvq, FQA puqu.

Remark 2. Every complete neutrosophic soft rough graph is a strong neutrosophic soft rough graph. However,
the converse is not true.

Definition 19. A neutrosophic soft rough graph G is isolated if, for all v, u ∈ QA,

T_{\underline{S}B}(vu) = 0, I_{\underline{S}B}(vu) = 0, F_{\underline{S}B}(vu) = 0, T_{\overline{S}B}(vu) = 0, I_{\overline{S}B}(vu) = 0, F_{\overline{S}B}(vu) = 0.

Theorem 1. The rejection of two neutrosophic soft rough graphs is a neutrosophic soft rough graph.

Proof. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two NSRGs. Let G “ G1 |G2 “ pG1 |G2 , G1 |G2 q be the
rejection of G1 and G2 , where G1 |G2 “ pQA1 |QA2 , SB1 |SB2 q and G1 |G2 “ pQA1 |QA2 , SB1 |SB2 q. We
claim that G “ G1 |G2 is a neutrosophic soft rough graph. It is enough to show that SB1 |SB2 and
SB1 |SB2 are neutrosophic relations on QA1 |QA2 and QA1 |QA2 , respectively. First, we show that
SB1 |SB2 is a neutrosophic relation on QA1 |QA2 .
If v P QA1 , v1 v2 R SB2 , then

TpSB1 |SB2 q ppv, v1 qpv, v2 qq “ pTQA1 pvq ^ pTQA2 pv2 q ^ TQA2 pv2 qqq
“ pTQA1 pvq ^ TQA2 pv2 qq ^ pTQA1 pvq ^ TQA2 pv2 qq
“ TpQA1 |QA2 q pv, v1 q ^ TpQA1 |QA2 q pv, v2 q
TpSB1 |SB2 q ppv, v1 qpv, v2 qq “ TpQA1 |QA2 q pv, v1 q ^ TpQA1 |QA2 q pv, v2 q
Similarly, IpSB1 |SB2 q ppv, v1 qpv, v2 qq “ IpQA1 |QA2 q pv, v1 q _ IpQA1 |QA2 q pv, v2 q
FpSB1 |SB2 q ppv, v1 qpv, v2 qq “ FpQA1 |QA2 q pv, v1 q _ FpQA1 |QA2 q pv, v2 q.

If v1 v2 R SB1 , v P QA2 , then

TpSB1 |SB2 q ppv1 , vqpv2 , vqq “ ppTQA1 pv1 q ^ TQA1 pv2 qq ^ TQA2 pvqq
“ ppTQA1 pv1 q ^ TQA2 pvqq ^ pTQA1 pv2 q ^ TQA2 pvqqq
“ TpQA1 |QA2 q pv1 , vq ^ TpQA1 |QA2 q pv2 , vq
TpSB1 |SB2 q ppv1 , vqpv2 , vqq “ TpQA1 |QA2 q pv1 , vq ^ TpQA1 |QA2 q pv2 , vq
Similarly, IpSB1 |SB2 q ppv1 , vqpv2 , vqq “ IpQA1 |QA2 q pv1 , vq _ IpQA1 |QA2 q pv2 , vq
FpSB1 |SB2 q ppv1 , vqpv2 , vqq “ FpQA1 |QA2 q pv1 , vq _ FpQA1 |QA2 q pv2 , vq.


If v1 v2 ∉ SB1 and u1 u2 ∉ SB2, then

TpSB1 |SB2 q ppv1 , u1 qpv2 , u2 qq “ ppTQA1 pv1 q ^ TQA1 pv2 qq ^ pTQA2 pu1 q ^ TQA2 pu2 qqq
“ pTQA1 pv1 q ^ TQA2 pu1 qq ^ pTQA1 pv2 q ^ TQA2 pu2 qq
“ TpQA1 |QA2 q pv1 , u1 q ^ TpQA1 |QA2 q pv2 , u2 q
TpSB1 |SB2 q ppv1 , u1 qpv2 , u2 qq “ TpQA1 |QA2 q pv1 , u1 q ^ TpQA1 |QA2 q pu1 , u2 q
Similarly, IpSB1 |SB2 q ppv1 , u1 qpv2 , u2 qq “ IpQA1 |QA2 q pv1 , u1 q _ IpQA1 |QA2 q pu1 , u2 q
FpSB1 |SB2 q ppv1 , u1 qpv2 , u2 qq “ FpQA1 |QA2 q pv1 , u1 q _ FpQA1 |QA2 q pu1 , u2 q.

Thus, SB1 |SB2 is a neutrosophic relation on QA1 |QA2 . Similarly, we can show that SB1 |SB2 is a
neutrosophic relation on QA1 |QA2 . Hence, G is a neutrosophic soft rough graph.

Theorem 2. The Cartesian product of two NSRGs is a neutrosophic soft rough graph.

Proof. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two NSRGs. Let G “ G1 ˙ G2 “ pG1 ˙ G2 , G1 ˙ G2 q


be the Cartesian product of G1 and G2 , where G1 ˙ G2 “ pQA1 ˙ QA2 , SB1 ˙ SB2 q and G1 ˙ G2 “
pQA1 ˙ QA2 , SB1 ˙ SB2 q. We claim that G “ G1 ˙ G2 is a neutrosophic soft rough graph. It is enough
to show that SB1 ˙ SB2 and SB1 ˙ SB2 are neutrosophic relations on QA1 ˙ QA2 and QA1 ˙ QA2 ,
respectively. We have to show that SB1 ˙ SB2 is a neutrosophic relation on QA1 ˙ QA2 .
If v P QA1 , v1 u1 P SB2 , then

TpSB1 ˙SB2 q ppv, v1 qpv, u1 qq “ TpQA1 q pvq ^ TpSB2 q pv1 u1 q


ď TpQA1 q pvq ^ pTpQA2 q pv1 q ^ TpQA2 qpu1 qq
“ pTpQA1 q pvq ^ TpQA2 q pv1 qq ^ pTpQA1 q pvq ^ TpQA2 q pu1 qq
“ TpQA1 ˙QA2 q pv, v1 q ^ TpQA1 ˙QA2 q pv, u1 q
TpSB1 ˙SB2 q ppv, v1 qpv, u1 qq ď TpQA1 ˙QA2 q pv, v1 q ^ TpQA1 ˙QA2 q pv, u1 q
Similarly, IpSB1 ˙SB2 q ppv, v1 qpv, u1 qq ď IpQA1 ˙QA2 q pv, v1 q _ IpQA1 ˙QA2 q pv, u1 q
FpSB1 ˙SB2 q ppv, v1 qpv, u1 qq ď FpQA1 ˙QA2 q pv, v1 q _ FpQA1 ˙QA2 q pv, u1 q.

If v1 u1 P SB1 , z P QA2 , then

TpSB1 ˙SB2 q ppv1 , zqpu1 , zqq “ TpSB1 q pv1 u1 q ^ TpQA2 q pzq


ď pTpQA1 qpv1 q^pQA1 q pu1 qq ^ TpQA2 q pzq
“ TpQA1 ˙QA2 q pv1 , zq ^ TpQA1 ˙QA2 q pu1 , zq
TpSB1 ˙SB2 q ppv1 , zqpu1 , zqq ď TpQA1 ˙QA2 q pv1 , zq ^ TpQA1 ˙QA2 q pu1 , zq
Similarly, IpSB1 ˙SB2 q ppv1 , zqpu1 , zqq ď IpQA1 ˙QA2 q pv1 , zq _ IpQA1 ˙QA2 q pu1 , zq
FpSB1 ˙SB2 q ppv1 , zqpu1 , zqq ď FpQA1 ˙QA2 q pv1 , zq _ FpQA1 ˙QA2 q pu1 , zq.

Therefore, SB1 ˙ SB2 is a neutrosophic relation on QA1 ˙ QA2. Similarly, SB1 ˙ SB2 is a neutrosophic relation on QA1 ˙ QA2. Hence, G is a neutrosophic soft rough graph.

Theorem 3. The cross product of two neutrosophic soft rough graphs is a neutrosophic soft rough graph.

Proof. Let G1 “ pG1 , G1 q and G2 “ pG2 , G2 q be two NSRGs. Let G “ G1 e G2 “ pG1 e G2 , G1 e G2 q


be the cross product of G1 and G2 , where G1 e G2 “ pQA1 e QA2 , SB1 e SB2 q and G1 e G2 “ pQA1 e
QA2 , SB1 e SB2 q. We claim that G “ G1 e G2 is a neutrosophic soft rough graph. It is enough to show
that SB1 e SB2 and SB1 e SB2 are neutrosophic relations on QA1 e QA2 and QA1 e QA2 , respectively.
First, we show that SB1 e SB2 is a neutrosophic relation on QA1 e QA2 .


If v1 u1 P SB1 , v1 u2 P SB2 , then

TpSB1 eSB2 q ppv1 , v1 qpu1 , u2 qq “ TpSB1 q pv1 u1 q ^ TpSB2 q pv1 u2 q


ď pTpQA1 q pv1 q ^ TpQA1 q pu1 q ^ pTpQA2 q pv1 q ^ TpQA2 q pu2 qq
“ pTpQA1 q pv1 q ^ TpQA2 q pv1 qq ^ pTpQA1 q pu1 q ^ TpQA2 q pu2 qq
“ TpQA1 eQA2 q pv1 , v1 q ^ TpQA1 eQA2 q pu1 , u2 q
TpSB1 eSB2 q ppv1 , v1 qpu1 , u2 qq ď TpQA1 eQA2 q pv1 , v1 q ^ TpQA1 eQA2 q pv, u2 q
Similarly, IpSB1 eSB2 q ppv1 , v1 qpu1 , u2 qq ď IpQA1 eQA2 q pv1 , v1 q _ IpQA1 eQA2 q pv, u2 q
FpSB1 eSB2 q ppv1 , v1 qpu1 , u2 qq ď FpQA1 eQA2 q pv1 , v1 q _ FpQA1 eQA2 q pv, u2 q.

Thus, SB1 e SB2 is a neutrosophic relation on QA1 e QA2 . Similarly, we can show that SB1 e SB2
is a neutrosophic relation on QA1 e QA2 . Hence, G is a neutrosophic soft rough graph.

3. Application
In this section, we apply the concept of NSRSs to a decision-making problem. In recent times, the object recognition problem has gained considerable importance. The object recognition problem can be considered as a decision-making problem in which the final identification of objects is based on a given set of information. A detailed description of the algorithm for the selection of the most suitable objects from an available set of alternatives is given, and the proposed decision-making method uses the lower and upper approximation operators to address the core concerns of the problem. The presented algorithm can be applied to avoid lengthy calculations when dealing with a large number of objects. This method can be applied in various domains for the multi-criteria selection of objects.

Selection of Most Suitable Generic Version of Brand Name Medicine


In the pharmaceutical industry, different pharmaceutical companies develop, produce and discover pharmaceutical medicines (drugs) for use as medication. These pharmaceutical companies deal with both "brand name medicine" and "generic medicine". Brand name medicine and generic medicine are bioequivalent; that is, they have the same rate and extent of absorption. They have the same active ingredients, but the inactive ingredients may differ, and a generic product may be slightly dissimilar in color, shape, or markings. The most important difference is cost: generic medicine is less expensive than its brand name comparators, and generic drug manufacturers face competition to produce lower-cost products. We consider a brand name drug "u = Loratadine" used for seasonal allergy medication. Consider

V “ tu1 “ Triamcinolone, u2 “ Cetirizine/Pseudoephedrine,


u3 “ Pseudoephedrine, u4 “ loratadine/pseudoephedrine,
u5 “ Fluticasoneu

is a set of generic versions of "Loratadine". We want to select the most suitable generic version of Loratadine on the basis of the parameters e1 = highly soluble, e2 = highly permeable, and e3 = rapidly dissolving. Let M = {e1, e2, e3} be the set of parameters. Let Q be a neutrosophic soft relation from V to the parameter set M that describes the truth-membership, indeterminacy-membership and falsity-membership degrees of the generic version medicines corresponding to the parameters, as shown in Table 11.


Table 11. Neutrosophic soft set pQ, Mq.

Q u1 u2 u3 u4 u5
e1 p0.4, 0.5, 0.6q p0.5, 0.3, 0.6q p0.7, 0.2, 0.3q p0.5, 0.7, 0.5q p0.6, 0.5, 0.4q
e2 p0.7, 0.3, 0.2q p0.3, 0.4, 0.3q p0.6, 0.5, 0.4q p0.8, 0.4, 0.6q p0.7, 0.8, 0.5q
e3 p0.6, 0.3, 0.4q p0.7, 0.2, 0.3q p0.7, 0.2, 0.4q p0.8, 0.7, 0.6q p0.7, 0.3, 0.5q

Suppose A “ tpe1 , 0.2, 0.4, 0.5q, pe2 , 0.5, 0.6, 0.4q, pe3 , 0.7, 0.5, 0.4qu is the most favorable object that
is an NS on the parameter set M under consideration. Then, pQpAq, QpAqq is an NSRS in NSAS
pV, M, Qq, where

QpAq “ tpu1 , 0.6, 0.5, 0.4q, pu2 , 0.7, 0.4, 0.4q, pu3 , 0.7, 0.4, 0.4q, pu4 , 0.7, 0.6, 0.5q, pu5 , 0.7, 0.5, 0.5qu,
QpAq “ tpu1 , 0.5, 0.6, 0.4q, pu2 , 0.5, 0.6, 0.5q, pu3 , 0.3, 0.3, 0.5q, pu4 , 0.5, 0.6, 0.5q, pu5 , 0.4, 0.5, 0.5qu.

Let E = {u1 u2, u1 u3, u4 u1, u2 u3, u5 u3, u2 u4, u2 u5} ⊆ V́ and L = {e1 e2, e2 e3, e1 e3} ⊆ Ḿ.


Then, a neutrosophic soft relation S on E (from L to E) can be defined as follows in Table 12:

Table 12. Neutrosophic soft relation S.

S u1 u2 u1 u3 u4 u1 u2 u3 u5 u3 u2 u4 u2 u5
e1 e2 (0.3, 0.4 ,0.2) p0.4, 0.4, 0.5q p0.4, 0.4, 0.5q p0.6, 0.3, 0.4q p0.4, 0.2, 0.2q p0.4, 0.4, 0.2q p0.4, 0.3, 0.4q
e2 e3 (0.5 ,0.4 ,0.1) p0.4, 0.3, 0.2q p0.4, 0.3, 0.2q p0.3, 0.3, 0.2q p0.6, 0.2, 0.4q p0.3, 0.2, 0.1q p0.3, 0.3, 0.2q
e1 e3 (0.4,0.4,0.1) p0.4, 0.2, 0.2q p0.4, 0.2, 0.2q p0.5, 0.3, 0.3q p0.4, 0.2, 0.3q p0.4, 0.3, 0.1q p0.5, 0.3, 0.2q

Let B “ tpe1 e2 , 0.2, 0.4, 0.5q, pe2 e3 , 0.5, 0.4, 0.4q, pe1 e3 , 0.5, 0.2, 0.5qu be an NS on L that describes some
relationship between the parameters under consideration; then, SB “ pSB, SBq is an NSRR, where

SB “ tpu1 u2 , 0.5, 0.4, 0.4q, pu1 u3 , 0.4, 0.2, 0.4q, pu4 u1 , 0.4, 0.2, 0.4q, pu2 u3 , 0.5, 0.3, 0.4q,
pu5 u3 , 0.5, 0.2, 0.4q, pu2 u4 , 0.4, 0.3, 0.4q, pu2 u5 , 0.5, 0.3, 0.4qu,
SB “ tpu1 u2 , 0.2, 0.4, 0.4qpu1 u3 , 0.5, 0.4, 0.4q, pu4 u1 , 0.5, 0.4, 0.4q, pu2 u3 , 0.4, 0.4, 0.5q,
pu5 u3 , 0.2, 0.4, 0.4q, pu2 u4 , 0.2, 0.4, 0.4q, pu2 u5 , 0.4, 0.4, 0.5qu.

Thus, G “ pG, Gq is an NSRG as shown in Figure 12.

Figure 12. Neutrosophic soft rough graph G “ pG, Gq

In [3], the sum of two neutrosophic numbers is defined.


Definition 20 ([3]). Let C and D be two single-valued neutrosophic numbers. The sum of two single-valued neutrosophic numbers is defined as follows:

C ⊕ D = ⟨T_C + T_D − T_C × T_D, I_C × I_D, F_C × F_D⟩. (1)

Algorithm 1: Algorithm for selection of the most suitable objects

1. Input the number of elements in the vertex set V = {u1, u2, . . . , un}.
2. Input the number of elements in the parameter set M = {e1, e2, . . . , em}.
3. Input a neutrosophic soft relation Q from V to M.
4. Input a neutrosophic set A on M.
5. Compute the neutrosophic soft rough vertex set Q(A) = (\underline{Q}(A), \overline{Q}(A)).
6. Input the number of elements in the edge set E = {u1u1, u1u2, . . . , uku1}.
7. Input the number of elements in the parameter set Ḿ = {e1e1, e1e2, . . . , ele1}.
8. Input a neutrosophic soft relation S from V́ to Ḿ.
9. Input a neutrosophic set B on Ḿ.
10. Compute the neutrosophic soft rough edge set S(B) = (\underline{S}(B), \overline{S}(B)).
11. Compute the neutrosophic set α = (Tα(ui), Iα(ui), Fα(ui)), where

Tα(ui) = T_{\underline{Q}(A)}(ui) + T_{\overline{Q}(A)}(ui) − T_{\underline{Q}(A)}(ui) × T_{\overline{Q}(A)}(ui),
Iα(ui) = I_{\underline{Q}(A)}(ui) × I_{\overline{Q}(A)}(ui),
Fα(ui) = F_{\underline{Q}(A)}(ui) × F_{\overline{Q}(A)}(ui).

12. Compute the neutrosophic set β = (Tβ(uiuj), Iβ(uiuj), Fβ(uiuj)), where

Tβ(uiuj) = T_{\underline{S}(B)}(uiuj) + T_{\overline{S}(B)}(uiuj) − T_{\underline{S}(B)}(uiuj) × T_{\overline{S}(B)}(uiuj),
Iβ(uiuj) = I_{\underline{S}(B)}(uiuj) × I_{\overline{S}(B)}(uiuj),
Fβ(uiuj) = F_{\underline{S}(B)}(uiuj) × F_{\overline{S}(B)}(uiuj).

13. Calculate the score value of each object ui, where the score function is defined as

S̃(ui) = \sum_{uiuj ∈ E} (Tα(uj) + Iα(uj) − Fα(uj)) / (3 − (Tβ(uiuj) + Iβ(uiuj) − Fβ(uiuj))).

14. The decision is uk if S̃(uk) = max_{i=1,...,n} S̃(ui).
15. If k takes more than one value, then any one of the uk may be chosen.
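Steps 11-14 of Algorithm 1 can be condensed into a few lines of code. The sketch below (function names such as nn_sum and scores are illustrative choices of this edit, and rounding of intermediate values may differ slightly from the hand computations reported later) uses the sum of Definition 20 and the score function of Step 13:

```python
# alpha/beta combine the lower and upper rough approximations of the vertices
# and edges via Definition 20; the score of u_i aggregates over edges u_i u_j in E.
def nn_sum(c, d):
    # <T_C + T_D - T_C*T_D, I_C*I_D, F_C*F_D>
    return (c[0] + d[0] - c[0] * d[0], c[1] * d[1], c[2] * d[2])

def scores(QA_low, QA_up, SB_low, SB_up, E):
    alpha = {u: nn_sum(QA_low[u], QA_up[u]) for u in QA_low}
    beta = {e: nn_sum(SB_low[e], SB_up[e]) for e in SB_low}
    s = {u: 0.0 for u in alpha}
    for (ui, uj) in E:
        tb, ib, fb = beta[(ui, uj)]
        ta, ia, fa = alpha[uj]
        s[ui] += (ta + ia - fa) / (3 - (tb + ib - fb))
    return s

# Usage: s = scores(QA_low, QA_up, SB_low, SB_up, E); best = max(s, key=s.get)
```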

The sum of the UNSRS \overline{Q}A and the LNSRS \underline{Q}A, and the sum of the LNSRR \underline{S}B and the UNSRR \overline{S}B, are the NSs \underline{Q}A ⊕ \overline{Q}A and \underline{S}B ⊕ \overline{S}B, respectively, defined by

α = \underline{Q}A ⊕ \overline{Q}A = {(u1, 0.8, 0.3, 0.16), (u2, 0.85, 0.24, 0.2), (u3, 0.79, 0.2, 0.2), (u4, 0.85, 0.36, 0.25), (u5, 0.82, 0.25, 0.25)},
β = \underline{S}B ⊕ \overline{S}B = {(u1u2, 0.6, 0.16, 0.16), (u1u3, 0.7, 0.08, 0.16), (u4u1, 0.7, 0.08, 0.16), (u2u3, 0.7, 0.12, 0.2), (u5u3, 0.6, 0.08, 0.16), (u2u4, 0.52, 0.12, 0.16), (u2u5, 0.7, 0.12, 0.2)}.

The score function S̃(ui) is defined for each generic version medicine ui ∈ V as

S̃(ui) = \sum_{uiuj ∈ E} (Tα(uj) + Iα(uj) − Fα(uj)) / (3 − (Tβ(uiuj) + Iβ(uiuj) − Fβ(uiuj))),   (2)


and uk with the largest score value S̃(uk) = max_i S̃(ui) is the most suitable generic version medicine.
By calculations, we have

S̃(u1) = 0.88, S̃(u2) = 0.69, S̃(u3) = 0.26, S̃(u4) = 0.57, and S̃(u5) = 0.33. (3)

Here, u1 is the optimal decision, and the most suitable generic version of “Loratadine” is
“Triamcinolone”. We have used software MATLAB (version 7, MathWorks, Natick, MA, USA) for
calculating the required results in the application. The algorithm is given in Algorithm 1. The algorithm
of the program is general for any number of objects with respect to certain parameters.

4. Conclusions
Rough set theory can be considered as an extension of classical set theory. Rough set theory
is a very useful mathematical model to handle vagueness. NS theory, RS theory and SS theory are
three useful distinguished approaches to deal with vagueness. NS and RS models are used to handle
uncertainty, and combining these two models with another remarkable model of SSs gives more precise
results for decision-making problems. In this paper, we have presented the notion of NSRGs and
investigated some properties of NSRGs in detail. The notion of NSRGs can be utilized as a mathematical
tool to deal with imprecise and unspecified information. In addition, a decision-making method based
on NSRGs is proposed. This research work can be extended to (1) rough bipolar neutrosophic soft sets; (2) bipolar neutrosophic soft rough sets; (3) interval-valued bipolar neutrosophic rough sets; and (4) soft rough neutrosophic graphs.

Author Contributions: Muhammad Akram and Sundas Shahzadi conceived and designed the experiments;
Hafsa M. Malik performed the experiments; Florentin Smarandache analyzed the data; Sundas Shahzadi and
Hafsa M. Malik wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Smarandache, F. Neutrosophic set, a generalisation of the Intuitionistic Fuzzy Sets. Int. J. Pure Appl. Math.
2010, 24, 289–297.
2. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single-valued neutrosophic sets. Multispace Multistructure
2010, 4, 410–413.
3. Peng, J.J.; Wang, J.Q.; Zhang, H.Y.; Chen, X.H. An outranking approach for multi-criteria decision-making
problems with simplified neutrosophic sets. Appl. Soft Comput. 2014, 25, 336–346.
4. Molodtsov, D.A. Soft set theory-first results. Comput. Math. Appl. 1999, 37, 19–31.
5. Maji, P.K.; Biswas, R.; Roy, A.R. Fuzzy soft sets. J. Fuzzy Math. 2001, 9, 589–602.
6. Maji, P.K.; Biswas, R.; Roy, A.R. Intuitionistic fuzzy soft sets. J. Fuzzy Math. 2001, 9, 677–692.
7. Maji, P.K. Neutrosophic soft set. Ann. Fuzzy Math. Inform. 2013, 5, 157–168.
8. Sahin, R.; Kucuk, A. On similarity and entropy of neutrosophic soft sets. J. Intell. Fuzzy Syst. Appl.
Eng. Technol. 2014, 27, 2417–2430.
9. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
10. Feng, F.; Liu, X.; Leoreanu-Fotea, B.; Jun, Y.B. Soft sets and soft rough sets. Inf. Sci. 2011, 181, 1125–1137.
11. Meng, D.; Zhang, X.; Qin, K. Soft rough fuzzy sets and soft fuzzy rough sets. Comput. Math. Appl. 2011, 62,
4635–4645.
12. Sun, B.Z.; Ma, W. Soft fuzzy rough sets and its application in decision making. Artif. Intell. Rev. 2014, 41,
67–80.
13. Sun, B.Z.; Ma, W.; Liu, Q. An approach to decision making based on intuitionistic fuzzy rough sets over two
universes. J. Oper. Res. Soc. 2013, 64, 1079–1089.
14. Zhang, X.; Zhou, B.; Li, Q. A general frame for intuitionistic fuzzy rough sets. Inf. Sci. 2012, 216, 34–49.
15. Zhang, X.; Dai, J.; Yu, Y. On the union and intersection operations of rough sets based on various
approximation spaces. Inf. Sci. 2015, 292, 214–229.


16. Zhang, H.; Shu, L.; Liao, S. Intuitionistic fuzzy soft rough set and its application in decision making.
Abstr. Appl. Anal. 2014, 2014, 287314.
17. Zhang, H.; Xiong, L.; Ma, W. Generalized intuitionistic fuzzy soft rough set and its application in decision
making. J. Comput. Anal. Appl. 2016, 20, 750–766.
18. Broumi, S.; Smarandache, F. Interval-valued neutrosophic soft rough sets. Int. J. Comput. Math. 2015,
2015, 232919.
19. Broumi, S.; Smarandache, F.; Dhar, M. Rough neutrosophic sets. Neutrosophic Sets Syst. 2014, 3, 62–67.
20. Yang, H.L.; Zhang, C.L.; Guo, Z.L.; Liu, Y.L.; Liao, X. A hybrid model of single valued neutrosophic sets and
rough sets: Single valued neutrosophic rough set model. Soft Comput. 2016, doi:10.1007/s00500-016-2356-y .
21. Akram, M.; Nawaz, S. Operations on soft graphs. Fuzzy Inf. Eng. 2015, 7, 423–449.
22. Akram, M.; Nawaz, S. On fuzzy soft graphs. Ital. J. Pure Appl. Math. 2015, 34, 497–514.
23. Akram, M.; Shahzadi, S. Novel intuitionistic fuzzy soft multiple-attribute decision-making methods. Neural Comput. Appl. 2016, 1–13, doi:10.1007/s00521-016-2543-x.
24. Shahzadi, S.; Akram, M. Intuitionistic fuzzy soft graphs with applications. J. Appl. Math. Comput. 2016, 55,
369–392.
25. Akram, M.; Shahzadi, S. Neutrosophic soft graphs with application. J. Intell. Fuzzy Syst. 2017, 2, 841–858.
26. Zafar, F.; Akram, M. A novel decision-making method based on rough fuzzy information. Int. J. Fuzzy Syst.
2017, 1–15, doi:10.1007/s40815-017-0368-0.
27. Peng, T.Z.; Qiang, W.J.; Yu, Z.H. Hybrid single-valued neutrosophic MCGDM with QFD for market segment
evaluation and selection. J. Intell. Fuzzy Syst. 2018, 34, 177–187.
28. Qiang, W.J.; Xu, Z.; Yu, Z.H. Hotel recommendation approach based on the online consumer reviews using
interval neutrosophic linguistic numbers. J. Intell. Fuzzy Syst. 2018, 34, 381–394.
29. Luo, S.Z.; Cheng, P.F.; Wang, J.Q.; Huang, Y.J. Selecting Project Delivery Systems Based on Simplified
Neutrosophic Linguistic Preference Relations. Symmetry 2017, 9, 151, doi:10.3390/sym9070151.
30. Nie, R.X.; Wang, J.Q.; Zhang, H.Y. Solving solar-wind power station location problem using an extended
WASPAS technique with Interval neutrosophic sets. Symmetry 2017, 9, 106, doi:10.3390/sym9070106.
31. Wu, X.; Wang, J.; Peng, J.; Qian, J. A novel group decision-making method with probability hesitant interval
neutrosphic set and its application in middle level manager’s selection. Int. J. Uncertain. Quant. 2018,
doi:10.1615/Int.J.UncertaintyQuantification.2017020671.
32. Medina, J.; Ojeda-Aciego, M. Multi-adjoint t-concept lattices. Inf. Sci. 2010, 180, 712–725.
33. Pozna, C.; Minculete, N.; Precup, R.E.; Kóczy, L.T.; Ballagi, Á. Signatures: Definitions, operators and
applications to fuzzy modeling. Fuzzy Sets Syst. 2012, 201, 86–104.
34. Nowaková, J.; Prílepok, M.; Snášel, V. Medical image retrieval using vector quantization and fuzzy S-tree.
J. Med. Syst. 2017, 41, 1–16.
35. Kumar, A.; Kumar, D.; Jarial, S.K. A hybrid clustering method based on improved artificial bee colony and
fuzzy C-Means algorithm. Int. J. Artif. Intell. 2017, 15, 40–60.

c 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Neutrosophic Number Nonlinear Programming
Problems and Their General Solution Methods under
Neutrosophic Number Environments
Jun Ye *, Wenhua Cui and Zhikang Lu
Department of Electrical and Information Engineering, Shaoxing University, 508 Huancheng West Road,
Shaoxing 312000, China; [email protected] (W.C.); [email protected] (Z.L.)
* Correspondence: [email protected] or [email protected]; Tel.: +86-575-8832-7323

Received: 22 January 2018; Accepted: 22 February 2018; Published: 24 February 2018

Abstract: In practical situations, we often have to handle programming problems involving


indeterminate information. Building on the concepts of indeterminacy I and neutrosophic number
(NN) (z = p + qI for p, q ∈ R), this paper introduces some basic operations of NNs and concepts of
NN nonlinear functions and inequalities. These functions and/or inequalities contain indeterminacy
I and naturally lead to a formulation of NN nonlinear programming (NN-NP). These techniques
include NN nonlinear optimization models for unconstrained and constrained problems and their
general solution methods. Additionally, numerical examples are provided to show the effectiveness
of the proposed NN-NP methods. It is obvious that the NN-NP problems usually yield NN
optimal solutions, but not always. The possible optimal ranges of the decision variables and NN
objective function are indicated when the indeterminacy I is considered for possible interval ranges
in real situations.

Keywords: neutrosophic number; neutrosophic number function; neutrosophic number nonlinear


programming; neutrosophic number optimal solution

1. Introduction
Traditional mathematical programming usually handles optimization problems involving
deterministic objective functions and/or constrained functions. However, uncertainty also exists
in real problems. Hence, many researchers have proposed uncertain optimization methods, such as
approaches using fuzzy and stochastic logics, interval numbers, or uncertain variables [1–6].
Uncertain programming has been widely applied in engineering, management, and design problems.
In existing uncertain programming methods, however, the objective functions or constrained functions
are usually transformed into a deterministic or crisp programming problem to yield the optimal feasible
crisp solution of the decision variables and the optimal crisp value of the objective function. Hence,
existing uncertain linear or nonlinear programming methods are not really meaningful indeterminate
methods because they only obtain optimal crisp solutions rather than indeterminate solutions necessary
for real situations. However, indeterminate programming problems may also yield an indeterminate
optimal solution for the decision variables and the indeterminate optimal value of the objective function
suitable for real problems with indeterminate environments. Hence, it is necessary to understand how
to handle indeterminate programming problems with indeterminate solutions.
Since there exists indeterminacy in the real world, Smarandache [7–9] first introduced a concept
of indeterminacy—which is denoted by I, the imaginary value—and then he presented a neutrosophic
number (NN) z = p + qI for p, q ∈ R (R is all real numbers) by combining the determinate part p with the
indeterminate part qI. It is obvious that this is a useful mathematical concept for describing incomplete

Axioms 2018, 7, 13; doi:10.3390/axioms7010013


Axioms 2018, 7, 13

and indeterminate information. After their introduction, NNs were applied to decision-making [10,11]
and fault diagnosis [12,13] under indeterminate environments.
In 2015, Smarandache [14] introduced a neutrosophic function (i.e., interval function or thick
function), neutrosophic precalculus, and neutrosophic calculus to handle more indeterminate problems.
He defined a neutrosophic thick function g: R → G(R) (G(R) is the set of all interval functions) as the
form of an interval function g(x) = [g1 (x), g2 (x)]. After that, Ye et al. [15] introduced the neutrosophic
functions in expressions for the joint roughness coefficient and the shear strength in the mechanics of
rocks. Further, Ye [16] and Chen et al. [17,18] presented expressions and analyses of the joint roughness
coefficient using NNs. Ye [19] proposed the use of neutrosophic linear equations and their solution
methods in traffic flow problems with NN information.
Recently, NNs have been extended to linguistic expressions. For instance, Ye [20] proposed
neutrosophic linguistic numbers and their aggregation operators for multiple attribute group
decision-making. Further, Ye [21] presented hesitant neutrosophic linguistic numbers—based on
both the neutrosophic linguistic numbers and the concept of hesitant fuzzy logic—calculated their
expected value and similarity measure, and applied them to multiple attribute decision-making.
Additionally, Fang and Ye [22] introduced linguistic NNs based on both the neutrosophic linguistic
number and the neutrosophic set concept, and some aggregation operators of linguistic NNs for
multiple attribute group decision-making.
In practical problems, the information obtained by decision makers or experts may be
imprecise, uncertain, and indeterminate because of a lack of data, time pressures, measurement
errors, or the decision makers’ limited attention and knowledge. In these cases, we often have
to solve programming problems involving indeterminate information (indeterminacy I). However,
the neutrosophic functions introduced in [14,15] do not contain information about the indeterminacy
I and also cannot express functions involving indeterminacy I. Thus, it is important to define NN
functions containing indeterminacy I based on the concept of NNs, in order to handle programming
problems under indeterminate environments. Jiang and Ye [23] and Ye [24] proposed NN linear
and nonlinear programming models and their preliminary solution methods, but they only handled
some simple/specified NN optimization problems and did not propose effective solution methods for
complex NN optimization problems. To overcome this insufficiency, this paper first introduces some
operations of NNs and concepts of NN linear and nonlinear functions and inequalities, which contain
indeterminacy I. Then, various NN nonlinear programming (NN-NP) models and their general solution
methods are proposed in order to obtain NN/indeterminate optimal solutions.
The rest of this paper is structured as follows. On the basis of some basic concept of NNs,
Section 2 introduces some basic operations of NNs and concepts of NN linear and nonlinear functions
and inequalities with indeterminacy I. Section 3 presents NN-NP problems, including NN nonlinear
optimization models with unconstrained and constrained problems. In Section 4, general solution
methods are introduced for various NN-NP problems, and then numerical examples are provided to
illustrate the effectiveness of the proposed NN-NP methods. Section 5 contains some conclusions and
future research.

2. Neutrosophic Numbers and Neutrosophic Number Functions


Smarandache [7–9] first introduced an NN, denoted by z = p + qI for p, q ∈ R, consisting of a
determinate part p and an indeterminate part qI, where I is the indeterminacy. Clearly, it can express
determinate information and indeterminate information as in real world situations. For example,
consider the NN z = 5 + 3I for I ∈ [0, 0.3], which is equivalent to z ∈ [5, 5.9]. This indicates that
the determinate part of z is 5, the indeterminate part is 3I, and the interval of possible values for the
number z is [5, 5.9]. If I ∈ [0.1, 0.2] is considered as a possible interval range of indeterminacy I, then the
possible value of z is within the interval [5.3, 5.6]. For another example, the fraction 7/15 is within
the interval [0.46, 0.47], which is represented as the neutrosophic number z = 0.46 + 0.01I for I ∈ [0, 1].


The NN z indicates that the determinate value is 0.46, the indeterminate value is 0.01I, and the possible
value is within the interval [0.46, 0.47].
It is obvious that an NN z = p + qI may be considered as the possible interval range (changeable
interval number) z = [p + q·inf{I}, p + q·sup{I}] for p, q ∈ R and I ∈ [inf{I}, sup{I}]. For convenience, z is
denoted by z = [p + qIL , p + qIU ] for z ∈ Z (Z is the set of all NNs) and I ∈ [IL , IU ] for short. In special
cases, z can be expressed as the determinate part z = p if qI = 0 for the best case, and, also, z can be
expressed as the indeterminate part z = qI if p = 0 for the worst case.
Let two NNs be z1 = p1 + q1 I and z2 = p2 + q2 I for z1 , z2 ∈ Z, then their basic operational laws for
I ∈ [IL , IU ] are defined as follows [23,24]:

(1) z1 + z2 = p1 + p2 + (q1 + q2)I = [p1 + p2 + q1IL + q2IL, p1 + p2 + q1IU + q2IU];
(2) z1 − z2 = p1 − p2 + (q1 − q2)I = [p1 − p2 + q1IL − q2IL, p1 − p2 + q1IU − q2IU];
(3) z1 × z2 = p1p2 + (p1q2 + p2q1)I + q1q2I² = [min S, max S], where S = {(p1 + q1IL)(p2 + q2IL), (p1 + q1IL)(p2 + q2IU), (p1 + q1IU)(p2 + q2IL), (p1 + q1IU)(p2 + q2IU)};
(4) z1/z2 = (p1 + q1I)/(p2 + q2I) = [p1 + q1IL, p1 + q1IU]/[p2 + q2IL, p2 + q2IU] = [min T, max T], where T = {(p1 + q1IL)/(p2 + q2IU), (p1 + q1IL)/(p2 + q2IL), (p1 + q1IU)/(p2 + q2IU), (p1 + q1IU)/(p2 + q2IL)}.
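As a quick illustration of these operational laws, the following Python snippet (a hedged sketch added here, not part of the original formulation; the class and function names are ours) evaluates an NN as its possible interval and applies operations (1)–(4) in interval form.

```python
# Sketch (illustrative only): interval realization of the NN operations (1)-(4).
class NN:
    def __init__(self, p, q):
        self.p, self.q = p, q                     # z = p + q*I

    def interval(self, IL, IU):
        # possible interval range [p + q*IL, p + q*IU]
        return (self.p + self.q * IL, self.p + self.q * IU)

def nn_add(z1, z2):                               # operation (1)
    return NN(z1.p + z2.p, z1.q + z2.q)

def nn_sub(z1, z2):                               # operation (2)
    return NN(z1.p - z2.p, z1.q - z2.q)

def nn_mul_interval(z1, z2, IL, IU):              # operation (3): min/max of endpoint products
    a, b = z1.interval(IL, IU), z2.interval(IL, IU)
    prods = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(prods), max(prods))

def nn_div_interval(z1, z2, IL, IU):              # operation (4): min/max of endpoint quotients
    a, b = z1.interval(IL, IU), z2.interval(IL, IU)
    quots = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(quots), max(quots))

# Example from the text: z = 5 + 3I with I in [0, 0.3] corresponds to the interval [5, 5.9].
print(NN(5, 3).interval(0.0, 0.3))
```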

For a function containing indeterminacy I, we can define an NN function (indeterminate function)


in n variables (unknowns) as F(x, I): Zn → Z for x = [x1 , x2 , . . . , xn ]T ∈ Zn and I ∈ [IL , IU ], which is
either an NN linear or an NN nonlinear function. For example, F1(x, I) = x1 − Ix2 + 1 + 2I for x = [x1, x2]T ∈ Z2 and I ∈ [IL, IU] is an NN linear function, and F2(x, I) = x1² + x2² − 2Ix1 − Ix2 + 3I for x = [x1, x2]T ∈ Z2 and I ∈ [IL, IU] is an NN nonlinear function.
For an NN function in n variables (unknowns) g(x, I): Zn → Z, we can define an NN inequality
g(x, I) ≤ (≥) 0 for x = [x1 , x2 , . . . , xn ]T ∈ Zn and I ∈ [IL , IU ], where g(x, I) is either an NN linear
function or an NN nonlinear function. For example, g1(x, I) = 2x1 − Ix2 + 4 + 3I ≤ 0 and g2(x, I) = 2x1² − x2² + 2 + 5I ≤ 0 for x = [x1, x2]T ∈ Z2 and I ∈ [IL, IU] are NN linear and NN nonlinear inequalities in two variables, respectively.
Generally, the values of x, F(x, I), and g(x, I) are NNs (usually, but not always). In this study, we mainly investigate NN-NP problems and their general solution methods.

3. Neutrosophic Number Nonlinear Programming Problems


An NN-NP problem is similar to a traditional nonlinear programming problem, which is
composed of an objective function, general constraints, and decision variables. The difference is
that an NN-NP problem includes at least one NN nonlinear function, which could be the objective
function, or some or all of the constraints. In the real world, many real problems are inherently
nonlinear and indeterminate. Hence, various NN optimization models need to be established to
handle different NN-NP problems.
In general, NN-NP problems in n decision variables can be expressed by the following NN
mathematical models:

(1) Unconstrained NN optimization model:

min F(x, I), x ∈ Zn , (1)

where x = [x1 , x2 , . . . , xn ]T ∈ Zn , F(x, I): Zn → Z, and I ∈ [IL , IU ].


(2) Constrained NN optimization model:

min F(x, I)
s.t. gi (x, I) ≤ 0, i = 1, 2, . . . , m
(2)
hj (x, I) = 0, j = 1, 2, . . . , l
x ∈ Zn ,

where g1 (x, I), g2 (x, I), . . . , gm (x, I), h1 (x, I), h2 (x, I), . . . , hl (x, I): Zn → Z, and I ∈ [IL , IU ].

In special cases, if the NN-NP problem only contains the restrictions hj (x, I) = 0 without inequality
constraints, gi (x, I) ≤ 0, then the NN-NP problem is called the NN-NP problem with equality
constraints. If the NN-NP problem only contains the restrictions gi (x, I) ≤ 0, without constraints
hj (x, I) = 0, then the NN-NP problem is called the NN-NP problem with inequality constraints. Finally,
if the NN-NP problem contains neither the restrictions hj (x, I) = 0 nor gi (x, I) ≤ 0, then the constrained NN-NP problem reduces to the unconstrained NN-NP problem.
A solution for the decision variables is feasible in an NN-NP problem if it satisfies all of the constraints. Usually (but not always), the optimal solution for the decision variables and the value of the NN objective function are NNs. When the indeterminacy I is considered as a possible interval range (possible interval number), the set of all feasible intervals forms the feasible region or feasible set for x and I ∈ [IL, IU]. In this case, the value of the NN objective function is an optimal possible interval (NN) for F(x, I).
In the following section, we shall introduce general solution methods for NN-NP problems,
including unconstrained NN and constrained NN nonlinear optimizations, based on methods of
traditional nonlinear programming problems.

4. General Solution Methods for NN-NP Problems

4.1. One-Dimension Unconstrained NN Nonlinear Optimization


The simplest NN nonlinear optimization only has a nonlinear NN objective function with one
variable and no constraints. Let us consider a single variable NN nonlinear objective function F(x, I)
for x ∈ Z and I ∈ [IL , IU ]. Then, for a differentiable NN nonlinear objective function F(x, I), a local
optimal solution x* satisfies the following two conditions:

(1) Necessary condition: The derivative is dF(x* , I)/dx = 0 for I ∈ [IL , IU ];


(2) Sufficient condition: If the second derivative is d²F(x*, I)/dx² < 0 for I ∈ [IL, IU], then x* is an optimal solution for the maximum F(x*, I); if the second derivative is d²F(x*, I)/dx² > 0, then x* is an optimal solution for the minimum F(x*, I).

Example 1. An NN nonlinear objective function with one variable is F(x, I) = 2Ix² + 5I for x ∈ Z and
I ∈ [IL , IU ]. Based on the optimal conditions, we can obtain:

dF(x, I)/dx = 4Ix = 0 ⇒ x* = 0,

d²F(x, I)/dx²|x*=0 = 4I.
Assume that we consider a specific possible range of I ∈ [IL , IU ] according to real situations or actual
requirements, then we can discuss its optimal possible value. If I ∈ [1, 2] is considered as a possible interval
range, then d²F(x*, I)/dx² > 0, and x* = 0 is the optimal solution for the minimum F(x*, I). Thus, the minimum
value of the NN objective function is F(x* , I) = [5, 10], which, in this case, is a possible interval range, but not
always. Specifically if I = 1 (crisp value), then F(x* , I) = 5.
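The closed-form result of Example 1 can be checked numerically. The following sketch (ours, not part of the original solution procedure) samples F(x, I) = 2Ix² + 5I over x at the two endpoints of I ∈ [1, 2] and confirms that the minimum is attained at x* = 0 with value 5I, i.e., the interval [5, 10].

```python
import numpy as np

# Sketch: numerical check of Example 1 at the endpoints of I in [1, 2].
def F(x, I):
    return 2 * I * x**2 + 5 * I

xs = np.linspace(-3, 3, 601)          # grid containing x = 0
for I in (1.0, 2.0):
    values = F(xs, I)
    k = values.argmin()
    print(I, xs[k], values[k])        # minimizer ~ 0; minimum 5*I, i.e. 5 and 10
```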


4.2. Multi-Dimension Unconstrained NN Nonlinear Optimization


Assume that a multiple variable NN function F(x, I) for x = [x1 , x2 , . . . , xn ]T ∈ Zn and I ∈ [IL , IU ]
is considered as an unconstrained differentiable NN nonlinear objective function in n variables. Then,
we can obtain the partial derivatives:
∇F(x, I) = [∂F(x, I)/∂x1, ∂F(x, I)/∂x2, . . . , ∂F(x, I)/∂xn]T = 0 ⇒ x = x*.

Further, the partial second derivatives, structured as the Hessian matrix H(x, I), are:
H(x, I) = [∂²F(x, I)/(∂xi ∂xj)] (i, j = 1, 2, . . . , n), i.e., the n × n matrix whose (i, j) entry is ∂²F(x, I)/(∂xi ∂xj), evaluated at x = x*.

Then, the Hessian matrix H(x, I) is examined through its submatrices Hi(x, I) (i = 1, 2, . . . , n), where Hi(x, I) denotes the submatrix created by taking the first i rows and columns of H(x, I). The determinant of each of the n submatrices is calculated at x*: |H1(x*, I)| = |∂²F(x*, I)/∂x1²|, |H2(x*, I)| is the determinant of the 2 × 2 submatrix formed by the first two rows and columns of H(x*, I), and so on.

The definiteness of H(x*, I) is then judged from the sign patterns of the determinants of Hi(x*, I) (i = 1, 2, . . . , n) for I ∈ [IL, IU], as follows:

(1) If |Hi(x*, I)| > 0 for all i, then H(x*, I) is positive definite at x*;
(2) If |H1(x*, I)| < 0 and the remaining determinants alternate in sign, then H(x*, I) is negative definite at x*;
(3) If some of the determinants which are supposed to be nonzero turn out to be zero, then H(x*, I) can be positive semi-definite or negative semi-definite.

A local optimum x* of the neutrosophic nonlinear objective function F(x, I) for I ∈ [IL, IU] can be classified into the following categories:

(1) x* is a local maximum if ∇F(x* , I) = 0 and H(x* , I) is negative definite;


(2) x* is a local minimum if ∇F(x* , I) = 0 and H(x* , I) is positive definite;
(3) x* is a saddle point if ∇F(x* , I) = 0 and H(x* , I) is neither positive semi-definite nor
negative semi-definite.

Example 2. Consider the unconstrained NN nonlinear objective function with two variables x1 and x2 given by F(x, I) = x1² + x2² − 4Ix1 − 2Ix2 + 5 for x ∈ Z2 and I ∈ [IL, IU]. According to the optimal conditions, we first obtain the following derivative and the optimal solution:
∇F(x, I) = [∂F(x, I)/∂x1, ∂F(x, I)/∂x2]T = [2x1 − 4I, 2x2 − 2I]T = 0 ⇒ x* = [x1*, x2*]T = [2I, I]T.

Then, the NN Hessian matrix is given as follows:


H(x*, I) = [∂²F(x*, I)/∂x1², ∂²F(x*, I)/∂x1∂x2; ∂²F(x*, I)/∂x2∂x1, ∂²F(x*, I)/∂x2²] = [2, 0; 0, 2].

Thus, |H1(x*, I)| = 2 > 0 and |H(x*, I)| = 4 > 0, so H(x*, I) is positive definite. Hence, the NN optimal solution is x* = [2I, I]T and the minimum value of the NN objective function is F(x*, I) = 5(1 − I²) in this optimization problem.
If the indeterminacy I ∈ [0, 1] is considered as a possible interval range, then the optimal solution of
x is x1 * = [0, 2] and x2 * = [0, 1] and the minimum value of the NN objective function is F(x* , I) = [0, 5].
Specifically, when I = 1 is a determinate value, then x1 * = 2, x2 * = 1, and F(x* , I) = 0. In this case, the NN
nonlinear optimization is reduced to the traditional nonlinear optimization, which is a special case of the NN
nonlinear optimization.
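Example 2 can also be verified symbolically. The sketch below (an added check; it uses SymPy, which is not referenced in the paper) recovers the stationary point, the constant Hessian, and the minimum value 5(1 − I²).

```python
import sympy as sp

# Sketch: symbolic check of Example 2.
x1, x2, I = sp.symbols('x1 x2 I', real=True)
F = x1**2 + x2**2 - 4*I*x1 - 2*I*x2 + 5

grad = [sp.diff(F, v) for v in (x1, x2)]
sol = sp.solve(grad, (x1, x2))            # {x1: 2*I, x2: I}
H = sp.hessian(F, (x1, x2))               # [[2, 0], [0, 2]]: positive definite
Fmin = sp.simplify(F.subs(sol))           # 5 - 5*I**2 = 5*(1 - I**2)
print(sol, H, Fmin)
# With I in [0, 1], x1* ranges over [0, 2], x2* over [0, 1], and Fmin over [0, 5].
```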

4.3. NN-NP Problem Having Equality Constraints


Consider an NN-NP problem having NN equality constraints:

min F(x, I)
s.t. hj (x, I) = 0, j = 1, 2, . . . , l (3)
x ∈ Zn

where h1 (x, I), h2 (x, I), . . . , hl (x, I): Zn → Z and I ∈ [IL , IU ].


Here we consider Lagrange multipliers for the NN-NP problem. The Lagrangian function that
we minimize is then given by:

L(x, I, λ) = F(x, I) + ∑_{j=1}^{l} λj hj(x, I), λ ∈ Zl, x ∈ Zn,   (4)

where λj (j = 1, 2, . . . , l) is a Lagrange multiplier and I ∈ [IL , IU ]. It is obvious that this method


transforms the constrained optimization into unconstrained optimization. Then, the necessary
condition for this case to have a minimum is that:

∂L(x, I, λ)/∂xi = 0, i = 1, 2, . . . , n,

∂L(x, I, λ)/∂λj = 0, j = 1, 2, . . . , l.

By solving n + l equations above, we can obtain the optimum solution x* = [x1 * , x2 * , . . . , xn * ]T and
the optimum multiplier values λj * (j = 1, 2, . . . , l).

Example 3. Let us consider an NN-NP problem having an NN equality constraint:

min F(x, I) = 4Ix1² + 5x2²

s.t. h(x, I ) = 2x1 + 3x2 − 6I = 0, x ∈ Z2 .

Then, we can construct the Lagrangian function:

L(x, I, λ) = 4Ix1² + 5x2² + λ(2x1 + 3x2 − 6I), λ ∈ Z, x ∈ Z2.

The necessary condition for the optimal solution yields the following:

∂L(x, I, λ)/∂x1 = 8Ix1 + 2λ = 0, ∂L(x, I, λ)/∂x2 = 10x2 + 3λ = 0, and ∂L(x, I, λ)/∂λ = 2x1 + 3x2 − 6I = 0.


By solving these equations, we obtain x1 = −λ/(4I), x2 = −3λ/10, and λ = −12I²/(1 + 1.8I). Hence, the NN optimal solution is x1* = 3I/(1 + 1.8I) and x2* = 18I²/(5 + 9I). If the
indeterminacy I ∈ [1, 2] is considered as a possible interval range, then the optimal solution is x1 * = [0.6522,
4.2857] and x2 * = [0.7826, 5.1429]. Specifically, if I = 1 (crisp value), then the optimal solution is x1 * = 1.0714
and x2 * = 1.2857, which are reduced to the crisp optimal solution in classical optimization problems.
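The Lagrangian conditions of Example 3 can be solved symbolically as a check (an added sketch; the variable names are ours).

```python
import sympy as sp

# Sketch: solve the necessary conditions of Example 3 and evaluate the crisp case I = 1.
x1, x2, lam, I = sp.symbols('x1 x2 lam I', real=True)
eqs = [8*I*x1 + 2*lam,        # dL/dx1 = 0
       10*x2 + 3*lam,         # dL/dx2 = 0
       2*x1 + 3*x2 - 6*I]     # dL/dlam = 0 (equality constraint)
sol = sp.solve(eqs, (x1, x2, lam), dict=True)[0]

print(sp.simplify(sol[x1]))   # 15*I/(9*I + 5), i.e. 3I/(1 + 1.8I)
print(sp.simplify(sol[x2]))   # 18*I**2/(9*I + 5)
print(float(sol[x1].subs(I, 1)), float(sol[x2].subs(I, 1)))   # ~1.0714 and ~1.2857
```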

4.4. General Constrained NN-NP Problems


Now, we consider a general constrained NN-NP problem:

min F(x, I)
s.t. gk (x, I) ≤ 0, k = 1, 2, . . . , m
(5)
hj (x, I) = 0, j = 1, 2, . . . , l
x ∈ Zn

where g1 (x, I), g2 (x, I), . . . , gm (x, I), h1 (x, I), h2 (x, I), . . . , hl (x, I): Zn → Z for I ∈ [IL , IU ]. Then, we can
consider the NN Lagrangian function for the NN-NP problem:

L(x, I, μ, λ) = F(x, I) + ∑_{k=1}^{m} μk gk(x, I) + ∑_{j=1}^{l} λj hj(x, I), μ ∈ Zm, λ ∈ Zl, x ∈ Zn.   (6)

The usual NN Karush–Kuhn–Tucker (KKT) necessary conditions yield:

∇F(x, I) + ∑_{k=1}^{m} μk ∇gk(x, I) + ∑_{j=1}^{l} λj ∇hj(x, I) = 0   (7)

combined with the original constraints, complementary slackness for the inequality constraints,
and μk ≥ 0 for k = 1, 2, . . . , m.

Example 4. Let us consider an NN-NP problem with one NN inequality constraint:

min F(x, I) = Ix1² + 2x2²

s.t. g(x, I ) = I − x1 − x2 ≤ 0, x ∈ Z2 .

Then, the NN Lagrangian function is constructed as:

L(x, I, μ) = Ix1² + 2x2² + μ(I − x1 − x2), μ ∈ Z, x ∈ Z2.

The usual NN KKT necessary conditions yield:

∂L(x, I, μ)/∂x1 = 2Ix1 − μ = 0, ∂L(x, I, μ)/∂x2 = 4x2 − μ = 0, and μ(I − x1 − x2) = 0.

By solving these equations, we obtain x1 = μ/(2I), x2 = μ/4, and μ = 4I²/(2 + I) (μ = 0 yields an infeasible solution for I > 0). Hence, the NN optimal solution is x1* = 2I/(2 + I) and x2* = I²/(2 + I).
If the indeterminacy I ∈ [1, 2] is considered as a possible interval range corresponding to some specific
actual requirement, then the optimal solution is x1 * = [0.5, 1.3333] and x2 * = [0.25, 1.3333]. As another case,
if the indeterminacy I ∈ [2, 3] is considered as a possible interval range corresponding to some specific actual
requirement, then the optimal solution is x1 * = [0.8, 1.5] and x2 * = [0.8, 2.25]. Specifically, if I = 2 (a crisp
value), then the optimal solution is x1 * = 1 and x2 * = 1, which is reduced to the crisp optimal solution of the
crisp/classical optimization problem.
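The KKT system of Example 4 can be checked in the same way (an added sketch, assuming the inequality constraint is active, i.e., x1 + x2 = I with μ > 0).

```python
import sympy as sp

# Sketch: KKT stationarity of Example 4 with the constraint active.
x1, x2, mu, I = sp.symbols('x1 x2 mu I', real=True)
eqs = [2*I*x1 - mu,           # dL/dx1 = 0
       4*x2 - mu,             # dL/dx2 = 0
       I - x1 - x2]           # active constraint g(x, I) = 0
sol = sp.solve(eqs, (x1, x2, mu), dict=True)[0]

print(sp.simplify(sol[x1]))   # 2*I/(I + 2)
print(sp.simplify(sol[x2]))   # I**2/(I + 2)
print(sol[x1].subs(I, 2), sol[x2].subs(I, 2))   # crisp I = 2 gives x1* = 1, x2* = 1
```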


Compared with existing uncertain optimization methods [1–6], the proposed NN-NP methods can obtain ranges of optimal solutions (usually NN solutions, but not always) rather than the crisp optimal solutions produced by previous uncertain optimization methods [1–6], which are not really meaningful for indeterminate programming problems that require indeterminate solutions in real situations [23,24]. The existing uncertain optimization solutions are special cases of the proposed NN-NP optimization solutions.
Furthermore, the existing uncertain optimization methods in [1–6] cannot express and solve the NN-NP
problems from this study. Obviously, the optimal solutions in the NN-NP problems are intervals
corresponding to different specific ranges of the indeterminacy I ∈ [IL , IU ] and show the flexibility
and rationality under indeterminate/NN environments, which is the main advantage of the proposed
NN-NP methods.

5. Conclusions
On the basis of the concepts of indeterminacy I and NNs, this paper introduced some basic
operations of NNs and concepts of both NN linear and nonlinear functions and inequalities,
which involve indeterminacy I. Then, we proposed NN-NP problems with unconstrained and
constrained NN nonlinear optimizations and their general solution methods for various optimization
models. Numerical examples were provided to illustrate the effectiveness of the proposed NN-NP
methods. The main advantages are that: (1) some existing optimization methods like the Lagrange
multiplier method and the KKT condition can be employed for NN-NP problems, (2) the indeterminate
(NN) programming problems can show indeterminate (NN) optimal solutions which can indicate
possible optimal ranges of the decision variables and NN objective function when indeterminacy
I ∈ [IL , IU ] is considered as a possible interval range for real situations and actual requirements,
and (3) NN-NP is the generalization of traditional nonlinear programming problems and is more flexible and more suitable than the existing uncertain nonlinear programming methods under indeterminate environments. The proposed NN-NP methods provide a new and effective way of avoiding the crisp solutions of existing uncertain programming methods under indeterminate environments.
It is obvious that the NN-NP methods proposed in this paper not only are the generalization
of existing certain or uncertain nonlinear programming methods but also can deal with determinate
and/or indeterminate mathematical programming problems. In the future, we shall apply these
NN-NP methods to engineering fields, such as engineering design and engineering management.

Acknowledgments: This paper was supported by the National Natural Science Foundation of China
(Nos. 71471172, 61703280).
Author Contributions: Jun Ye proposed the neutrosophic number nonlinear programming methods and
Wenhua Cui and Zhikang Lu gave examples, calculations, and comparative analysis. All the authors wrote
the paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Jiang, C.; Long, X.Y.; Han, X.; Tao, Y.R.; Liu, J. Probability-interval hybrid reliability analysis for cracked
structures existing epistemic uncertainty. Eng. Fract. Mech. 2013, 112–113, 148–164. [CrossRef]
2. Zhang, B.; Peng, J. Uncertain programming model for uncertain optimal assignment problem.
Appl. Math. Model. 2013, 37, 6458–6468. [CrossRef]
3. Jiang, C.; Zhang, Z.G.; Zhang, Q.F.; Han, X.; Xie, H.C.; Liu, J. A new nonlinear interval programming method
for uncertain problems with dependent interval variables. Eur. J. Oper. Res. 2014, 238, 245–253. [CrossRef]
4. Liu, B.D.; Chen, X.W. Uncertain multiobjective programming and uncertain goal programming. J. Uncertain.
Anal. Appl. 2015, 3, 10. [CrossRef]
5. Veresnikov, G.S.; Pankova, L.A.; Pronina, V.A. Uncertain programming in preliminary design of technical
systems with uncertain parameters. Procedia Comput. Sci. 2017, 103, 36–43. [CrossRef]
6. Chen, L.; Peng, J.; Zhang, B. Uncertain goal programming models for bicriteria solid transportation problem.
Appl. Soft Comput. 2017, 51, 49–59. [CrossRef]


7. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; American Research Press: Rehoboth,
MA, USA, 1998.
8. Smarandache, F. Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic Probability;
Sitech & Education Publisher: Columbus, OH, USA, 2013.
9. Smarandache, F. Introduction to Neutrosophic Statistics; Sitech & Education Publishing: Columbus, OH, USA, 2014.
10. Ye, J. Multiple-attribute group decision-making method under a neutrosophic number environment.
J. Intell. Syst. 2016, 25, 377–386. [CrossRef]
11. Ye, J. Bidirectional projection method for multiple attribute group decision making with neutrosophic
numbers. Neural Comput. Appl. 2017, 28, 1021–1029. [CrossRef]
12. Kong, L.W.; Wu, Y.F.; Ye, J. Misfire fault diagnosis method of gasoline engines using the cosine similarity
measure of neutrosophic numbers. Neutrosophic Sets Syst. 2015, 8, 43–46.
13. Ye, J. Fault diagnoses of steam turbine using the exponential similarity measure of neutrosophic numbers.
J. Intell. Fuzzy Syst. 2016, 30, 1927–1934. [CrossRef]
14. Smarandache, F. Neutrosophic Precalculus and Neutrosophic Calculus; EuropaNova: Brussels, Belgium, 2015.
15. Ye, J.; Yong, R.; Liang, Q.F.; Huang, M.; Du, S.G. Neutrosophic functions of the joint roughness coefficient
(JRC) and the shear strength: A case study from the pyroclastic rock mass in Shaoxing City, China.
Math. Prob. Eng. 2016, 2016, 4825709. [CrossRef]
16. Ye, J.; Chen, J.Q.; Yong, R.; Du, S.G. Expression and analysis of joint roughness coefficient using neutrosophic
number functions. Information 2017, 8, 69. [CrossRef]
17. Chen, J.Q.; Ye, J.; Du, S.G.; Yong, R. Expressions of rock joint roughness coefficient using neutrosophic
interval statistical numbers. Symmetry 2017, 9, 123. [CrossRef]
18. Chen, J.Q.; Ye, J.; Du, S.G. Scale effect and anisotropy analyzed for neutrosophic numbers of rock joint
roughness coefficient based on neutrosophic statistics. Symmetry 2017, 9, 208. [CrossRef]
19. Ye, J. Neutrosophic linear equations and application in traffic flow problems. Algorithms 2017, 10, 133.
[CrossRef]
20. Ye, J. Aggregation operators of neutrosophic linguistic numbers for multiple attribute group decision making.
SpringerPlus 2016, 5, 1691. [CrossRef] [PubMed]
21. Ye, J. Multiple attribute decision-making methods based on expected value and similarity measure of hesitant
neutrosophic linguistic numbers. Cogn. Comput. 2017. [CrossRef]
22. Fang, Z.B.; Ye, J. Multiple attribute group decision-making method based on linguistic neutrosophic numbers.
Symmetry 2017, 9, 111. [CrossRef]
23. Jiang, W.Z.; Ye, J. Optimal design of truss structures using a neutrosophic number optimization model under
an indeterminate environment. Neutrosophic Sets Syst. 2016, 14, 93–97.
24. Ye, J. Neutrosophic number linear programming method and its application under neutrosophic number
environments. Soft Comput. 2017. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Article
NN-Harmonic Mean Aggregation Operators-Based
MCGDM Strategy in a Neutrosophic
Number Environment
Kalyan Mondal 1 , Surapati Pramanik 2, *, Bibhas C. Giri 1 and Florentin Smarandache 3
1 Department of Mathematics, Jadavpur University, Kolkata-700032 West Bengal, India;
[email protected] (K.M.); [email protected] (B.C.G.)
2 Department of Mathematics, Nandalal Ghosh B.T. College, Panpur, PO-Narayanpur,
and District: North 24 Parganas, Pin-743126 West Bengal, India
3 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]; Tel.: +91-94-7703-5544 or +91-33-2560-1826; Fax: +91-33-2560-1826

Received: 18 November 2017; Accepted: 11 February 2018; Published: 23 February 2018

Abstract: A neutrosophic number (a + bI) is a significant mathematical tool to deal with indeterminate
and incomplete information which exists generally in real-world problems, where a and bI denote the
determinate component and indeterminate component, respectively. We define score functions and
accuracy functions for ranking neutrosophic numbers. We then define a cosine function to determine
the unknown weight of the criteria. We define the neutrosophic number harmonic mean operators
and prove their basic properties. Then, we develop two novel multi-criteria group decision-making
(MCGDM) strategies using the proposed aggregation operators. We solve a numerical example to
demonstrate the feasibility, applicability, and effectiveness of the two proposed strategies. Sensitivity
analysis with the variation of “I” on neutrosophic numbers is performed to demonstrate how the
preference ranking order of alternatives is sensitive to the change of “I”. The efficiency of the
developed strategies is ascertained by comparing the results obtained from the proposed strategies
with the results obtained from the existing strategies in the literature.

Keywords: neutrosophic number; neutrosophic number harmonic mean operator (NNHMO);


neutrosophic number weighted harmonic mean operator (NNWHMO); cosine function; score
function; multi-criteria group decision-making

1. Introduction
Multi-criteria decision-making (MCDM), and multi-criteria group decision-making (MCGDM)
are significant branches of decision theories which have been commonly applied in many scientific
fields. They have been developed in many directions, such as crisp environments [1,2], and uncertain
environments, namely fuzzy environments [3–13], intuitionistic fuzzy environments [14–24],
and neutrosophic set environments [25–45]. Smarandache [46,47] introduced another direction of
uncertainty by defining neutrosophic numbers (NN), which represent indeterminate and incomplete
information in a new way. A NN consists of a determinate component and an indeterminate component.
Thus, the NNs are more applicable to deal with indeterminate and incomplete information in real
world problems. The NN is expressed as the function N = p + qI in which p is the determinate
component and qI is the indeterminate component. If N = qI, i.e., the indeterminate part reaches the
maximum label, the worst situation occurs. If N = p, i.e., the indeterminate part does not appear, the
best situation occurs. Thus, the application of NNs is more appropriate to deal with the indeterminate
and incomplete information in real-world decision-making situations.




Information aggregation is an essential practice of accumulating relevant information from various sources. Aggregation operators produce values lying between the min and max operators. The harmonic mean is usually used as a mathematical tool to capture the central tendency of information [48].
The harmonic mean (HM) is widely used in statistics to calculate the central tendency of a set of
data. Park et al. [49] proposed multi-attribute group decision-making (MAGDM) strategy based on
HM operators under uncertain linguistic environments. Wei [50] proposed a MAGDM strategy based
on fuzzy-induced, ordered, weighted HM. In a fuzzy environment, Xu [48] studied a fuzzy-weighted
HM operator, fuzzy ordered weighted HM operator, and a fuzzy hybrid HM operator, and employed
them for MADM problems. Ye [51] proposed a multi-attribute decision-making (MADM) strategy
based on harmonic averaging projection for a simplified neutrosophic sets (SNS) environment.
In a NN environment, Ye [52] proposed a MAGDM using de-neutrosophication strategy and
a possibility degree ranking strategy for neutrosophic numbers. Liu and Liu [53] proposed a NN
generalized weighted power averaging operator for MAGDM. Zheng et al. [54] proposed a MAGDM
strategy based on a NN generalized hybrid weighted averaging operator. Pramanik et al. [55]
studied a teacher selection strategy based on projection and bidirectional projection measures in
a NN environment.
Only four MCGDM strategies using NNs [52–55] have been reported in the literature. Motivated by the works of Ye [52], Liu and Liu [53], Zheng et al. [54], and Pramanik et al. [55], we develop the proposed strategies to handle MCGDM problems in a NN environment.
The strategies [52–55] cannot deal with the situation when larger values other than arithmetic
mean, geometric mean, and harmonic mean are necessary for experimental purposes. To fill the
research gap, we propose two MCGDM strategies.
In this paper, we develop two new MCGDM strategies based on a NN harmonic mean operator
(NNHMO) and a NN weighted harmonic mean operator (NNWHMO) to solve MCGDM problems.
We define a cosine function to determine unknown weights of the criteria. To develop the proposed
strategies, we define score and accuracy functions for ranking NNs for the first time in the literature.
The rest of the paper is structured as follows: Section 2 presents some preliminaries of NNs and
score and accuracy functions of NNs. Section 3 devotes NN harmonic mean operator (NNHMO)
and NN weighted harmonic mean operator (NNWHMO). Section 4 defines the cosine function to
determine unknown criteria weights. Section 5 presents two novel decision-making strategies based
on NNHMO and NNWHMO. In Section 6, a numerical example is presented to illustrate the proposed
MCGDM strategies and the results show the feasibility of the proposed MCGDM strategies. Section 7
compares the obtained results derived from the proposed strategies and the existing strategies in NN
environment. Finally, Section 8 concludes the paper with some remarks and future scope of research.

2. Preliminaries
In this section, definition of harmonic and weighted harmonic mean of positive real numbers,
concepts of NNs, operations on NNs, score and accuracy functions of NNs are outlined.

2.1. Harmonic Mean and Weighted Harmonic Mean


Harmonic mean is a traditional average, which is generally used to determine central tendency of
data. The harmonic mean is commonly considered as a fusion method of numerical data.

Definition 1. [48]: The harmonic mean H of the positive real numbers x1 , x2 , . . . , xn is defined as:
H = n/(1/x1 + 1/x2 + ··· + 1/xn) = n/(∑_{i=1}^{n} 1/xi); i = 1, 2, . . . , n.

Definition 2. [49]: The weighted harmonic mean H of the positive real numbers x1 , x2 , . . . , xn is defined as
WH = 1/(w1/x1 + w2/x2 + ··· + wn/xn) = 1/(∑_{i=1}^{n} wi/xi); i = 1, 2, . . . , n.


Here, ∑_{i=1}^{n} wi = 1.

2.2. NNs
A NN [46,47] consists of a determinate component x and an indeterminate component yI,
and is mathematically expressed as z = x + yI for x, y ∈ R, where I is indeterminacy interval and
R is the set of real numbers. A NN z can be specified as a possible interval number, denoted by
z = [x + yIL , x + yIU ] for z ∈ Z (Z is set of all NNs) and I ∈ [IL , IU ]. The interval I ∈ [IL , IU ] is considered
as an indeterminate interval.

• If yI = 0, then z degenerates to the determinate component z = x;
• If x = 0, then z degenerates to the indeterminate component z = yI;
• If IL = IU, then z degenerates to a real number.

Let two NNs be z1 = x1 + y1 I and z2 = x2 + y2 I for z1 , z2 ∈ Z, and I ∈ [IL , IU ]. Some basic operational
rules for z1 and z2 are presented as follows:

(1) I² = I
(2) I·0 = 0
(3) I/I = undefined
(4) z1 + z2 = x1 + x2 + (y1 + y2)I = [x1 + x2 + (y1 + y2)IL, x1 + x2 + (y1 + y2)IU]
(5) z1 − z2 = x1 − x2 + (y1 − y2)I = [x1 − x2 + (y1 − y2)IL, x1 − x2 + (y1 − y2)IU]
(6) z1 × z2 = x1x2 + (x1y2 + x2y1)I + y1y2I² = x1x2 + (x1y2 + x2y1 + y1y2)I
(7) z1/z2 = (x1 + y1I)/(x2 + y2I) = x1/x2 + ((x2y1 − x1y2)/(x2(x2 + y2)))I; x2 ≠ 0, x2 ≠ −y2
(8) 1/z1 = 1/(x1 + y1I) = 1/x1 − (y1/(x1(x1 + y1)))I; x1 ≠ 0, x1 ≠ −y1
(9) z1² = x1² + (2x1y1 + y1²)I
(10) λz1 = λx1 + λy1I

Theorem 1. If z is a neutrosophic number, then 1/(z)^(−1) = z, z ≠ 0.

Proof. Let z = x + yI. Then

1/z = (z)^(−1) = 1/x − (y/(x(x + y)))I; x ≠ 0, x ≠ −y.

Applying rule (8) again, with determinate part 1/x and indeterminate coefficient −y/(x(x + y)) (whose sum is 1/(x + y)), gives

1/(z)^(−1) = x + yI = z. □
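Theorem 1 can also be verified symbolically by applying rule (8) twice (an added sketch; the helper name is ours).

```python
import sympy as sp

# Sketch: applying the inversion rule (8) twice returns the original NN (Theorem 1).
x, y = sp.symbols('x y', positive=True)

def nn_inverse(p, q):
    # rule (8): 1/(p + q*I) = 1/p - q/(p*(p + q)) * I
    return 1/p, -q/(p*(p + q))

p1, q1 = nn_inverse(x, y)        # (1/x, -y/(x*(x + y)))
p2, q2 = nn_inverse(p1, q1)      # invert again
print(sp.simplify(p2), sp.simplify(q2))   # x and y, i.e. 1/(z^-1) = z
```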


Definition 3. For any NN z = x + yI = [x + yIL, x + yIU], (x and y not both zeroes), its score and accuracy functions are defined, respectively, as follows:

Sc(z) = |x + y(IU − IL)| / (2√(x² + y²))   (1)

Ac(z) = 1 − exp(−|x + y(IU − IL)|)   (2)

Theorem 2. Both score function Sc(z) and accuracy function Ac(z) are bounded.


Proof. Since x, y ∈ R and I ∈ [0, 1],
0 ≤ |x|/√(x² + y²) ≤ 1 and 0 ≤ |y|/√(x² + y²) ≤ 1
⇒ 0 ≤ |x + y(IU − IL)|/√(x² + y²) ≤ 2 ⇒ 0 ≤ |x + y(IU − IL)|/(2√(x² + y²)) ≤ 1 ⇒ 0 ≤ Sc(z) ≤ 1.
Since 0 ≤ Sc(z) ≤ 1, the score function is bounded.
Again:
0 ≤ exp(−|x + y(IU − IL)|) ≤ 1
⇒ −1 ≤ −exp(−|x + y(IU − IL)|) ≤ 0
⇒ 0 ≤ 1 − exp(−|x + y(IU − IL)|) ≤ 1.
Since 0 ≤ Ac(z) ≤ 1, the accuracy function is bounded. □

Definition 4. Let two NNs be z1 = x1 + y1 I = [x1 + y1 IL , x1 + y1 IU ], and z2 = x2 + y2 I = [x2 + y2 IL , x2 + y2 IU ],


then the following comparative relations hold:

• If S(z1 ) > S(z2 ), then z1 > z2


• If S(z1 ) = S(z2 ) and A(z1 ) < A(z2 ), then z1 < z2
• If S(z1 ) = S(z2 ) and A(z1 ) = A(z2 ), then z1 = z2 .

Example 1. Let three NNs be z1 = 10 + 2I, z2 = 12 and z3 = 12 + 5I and I ∈ [0, 0.2]. Then,

S(z1 ) = 0.5099, S(z2 ) = 0.5, S(z3 ) = 0.5577, A(z1 ) = 0.999969, A(z2 ) = 0.999994, A(z3 ) = 0.999997.

We see that S(z3) > S(z1) > S(z2), and A(z3) > A(z2).

Using Definition 4, we conclude that z3 > z1 > z2.
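The ranking in Example 1 relies only on Equations (1) and (2), which are easy to compute directly. The following sketch (helper names ours) implements the two functions as reconstructed above; for z1 = 10 + 2I with I ∈ [0, 0.2] it returns a score of about 0.5099 and an accuracy of about 0.99997, in line with the values quoted in the example.

```python
import math

# Sketch of Equations (1) and (2); z = x + y*I with I in [IL, IU].
def score(x, y, IL, IU):
    return abs((x + y * (IU - IL)) / (2 * math.sqrt(x**2 + y**2)))

def accuracy(x, y, IL, IU):
    return 1 - math.exp(-abs(x + y * (IU - IL)))

print(round(score(10, 2, 0.0, 0.2), 4))      # ~0.5099 for z1 = 10 + 2I
print(round(score(12, 0, 0.0, 0.2), 4))      # 0.5 for z2 = 12
print(round(accuracy(10, 2, 0.0, 0.2), 6))   # ~0.99997 for z1
```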

3. Harmonic Mean Operators for NNs


In this section, we define harmonic mean operator and weighted harmonic mean operator for
neutrosophic numbers.

3.1. NN-Harmonic Mean Operator (NNHMO)

Definition 5. Let zi = xi + yi I (i = 1, 2, . . . , n) be a collection of NNs. Then the NNHMO is defined as follows:


 
NNHMO(z1, z2, ··· , zn) = n(∑_{i=1}^{n} (zi)^(−1))^(−1)   (3)

Theorem 3. Let zi = xi + yi I (i = 1, 2, . . . , n) be a collection of NNs. The aggregated value of the


NNHMO(z1 , z2 , · · · , zn ) operator is also a NN.

Proof.
NNHMO(z1, z2, ··· , zn) = n(∑_{i=1}^{n} (zi)^(−1))^(−1)
= n(∑_{i=1}^{n} 1/xi − ∑_{i=1}^{n} (yi/(xi(xi + yi)))I)^(−1); xi ≠ 0, xi ≠ −yi.
Writing A = ∑_{i=1}^{n} 1/xi and B = −∑_{i=1}^{n} yi/(xi(xi + yi)), this equals
n/A − (nB/(A(A + B)))I; A ≠ 0, A ≠ −B.
This shows that NNHMO is also a NN. □


3.2. NN-Weighted Harmonic Mean Operator (NNWHMO)

Definition 6. Let zi = xi + yi I (i = 1, 2, . . . , n) be a collection of NNs, and let wi (i = 1, 2, . . . , n) be the weight of zi with ∑_{i=1}^{n} wi = 1. Then the NN-weighted harmonic mean (NNWHMO) is defined as follows:

NNWHMO(z1, z2, ··· , zn) = (∑_{i=1}^{n} wi/zi)^(−1), zi ≠ 0   (4)

Theorem 4. Let zi = xi + yi I (i = 1, 2, . . . , n) be a collection of NNs. The aggregated value of the


NNWHMO(z1 , z2 , · · · , zn ) operator is also a NN.

Proof.
NNWHMO(z1, z2, ··· , zn) = (∑_{i=1}^{n} wi/zi)^(−1), zi ≠ 0
= (∑_{i=1}^{n} wi(1/xi − (yi/(xi(xi + yi)))I))^(−1); xi ≠ 0, xi ≠ −yi.
Writing A = ∑_{i=1}^{n} wi/xi and B = −∑_{i=1}^{n} wi yi/(xi(xi + yi)), with ∑_{i=1}^{n} wi = 1, this equals
1/A − (B/(A(A + B)))I; A ≠ 0, A ≠ −B.
This shows that NNWHMO is also a NN. □

Example 2. Let two NNs be z1 = 3 + 2I and z2 = 2 + I and I ∈ [0, 0.2]. Then:


   
NNHMO(z1, z2) = 2(1/z1 + 1/z2)^(−1) = 2(1/(3 + 2I) + 1/(2 + I))^(−1) = 2.4 + 0.635I.

Example 3. Let two NNs be z1 = 3 + 2I and z2 = 2 + I, I ∈ [0, 0.2] and w1 = 0.4, w2 = 0.6, then:
   
NNWHMO(z1, z2) = (w1/z1 + w2/z2)^(−1) = (0.4/(3 + 2I) + 0.6/(2 + I))^(−1) = 2.308 + 1.370I.
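The two operators can be sketched in code using the closed forms from the proofs of Theorems 3 and 4 (an illustration added here; the helper names are ours, and small differences from the rounded coefficients printed in Examples 2 and 3 may occur depending on where the inversion rule is applied). For instance, applied to the three NNs 4 + I, 3 + I, and 5 (the first row of the numerical example in Section 6), the sketch returns roughly 3.83 + 0.79I, consistent with the aggregation value reported there.

```python
# Sketch (helper names ours): NNHMO and NNWHMO via the closed forms of Theorems 3 and 4.
# Each NN is a pair (x, y) standing for x + y*I.
def nn_inverse(p, q):
    return 1.0 / p, -q / (p * (p + q))                  # rule (8)

def nnhmo(zs):
    A = sum(1.0 / x for x, _ in zs)
    B = sum(-y / (x * (x + y)) for x, y in zs)
    p, q = nn_inverse(A, B)
    return len(zs) * p, len(zs) * q                     # n * (A + B*I)^(-1)

def nnwhmo(zs, ws):
    A = sum(w / x for (x, _), w in zip(zs, ws))
    B = sum(-w * y / (x * (x + y)) for (x, y), w in zip(zs, ws))
    return nn_inverse(A, B)                             # (A + B*I)^(-1)

print(nnhmo([(4, 1), (3, 1), (5, 0)]))                  # ~ (3.83, 0.79)
# e.g. nnwhmo([(3, 2), (2, 1)], [0.4, 0.6]) for the weighted case
```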

The NNHMO operator and the NNWHMO operator satisfy the following properties.
P1. Idempotent law: If zi = z for i = 1, 2, . . . , n then, NNHMO(z1 , z2 , · · · , zn ) = z and
NNWHMO(z1 , z2 , · · · , zn ) = z.

Proof. For zi = z and ∑_{i=1}^{n} wi = 1,
NNHMO(z1, z2, ··· , zn) = n(∑_{i=1}^{n} (zi)^(−1))^(−1) = n(∑_{i=1}^{n} (z)^(−1))^(−1) = n/(nz^(−1)) = z.
NNWHMO(z1, z2, ··· , zn) = (∑_{i=1}^{n} wi/zi)^(−1) = (∑_{i=1}^{n} wi/z)^(−1) = ((∑_{i=1}^{n} wi)z^(−1))^(−1) = z. □


P2. Boundedness: Both the operators are bounded.

Proof. Let zmin = min(z1 , z2 , · · · , zn ), zmax = max(z1 , z2 , · · · , zn ) for i = 1, 2, . . . , n then,


zmin ≤ NNHMO(z1 , z2 , · · · , zn ) ≤ zmax and zmin ≤ NNWHMO(z1 , z2 , · · · , zn ) ≤ zmax .
Hence, both the operators are bounded. 
P3. Monotonicity: If zi ≤ zi∗ for i = 1, 2, . . . , n then, NNHMO(z1 , z2 , · · · , zn ) ≤ NNHMO(z1∗ , z2∗ , · · · , z∗n )
and NNWHMO(z1 , z2 , · · · , zn ) ≤ NNWHMO(z1∗ , z2∗ , · · · , z∗n ).

Proof. NNHMO(z1, z2, ··· , zn) − NNHMO(z1*, z2*, ··· , zn*) = n/(1/z1 + 1/z2 + ··· + 1/zn) − n/(1/z1* + 1/z2* + ··· + 1/zn*) ≤ 0, since zi ≤ zi*, i.e., 1/zi ≥ 1/zi*, for i = 1, 2, . . . , n.
Again, NNWHMO(z1, z2, ··· , zn) − NNWHMO(z1*, z2*, ··· , zn*) = 1/(w1/z1 + w2/z2 + ··· + wn/zn) − 1/(w1/z1* + w2/z2* + ··· + wn/zn*) ≤ 0, since zi ≤ zi*, i.e., 1/zi ≥ 1/zi*, and ∑_{i=1}^{n} wi = 1 (i = 1, 2, . . . , n).
This proves the monotonicity of the functions NNHMO(z1, z2, ··· , zn) and NNWHMO(z1, z2, ··· , zn). □
P4. Commutativity: If (z1°, z2°, ··· , zn°) is any permutation of (z1, z2, ··· , zn), then NNHMO(z1, z2, ··· , zn) = NNHMO(z1°, z2°, ··· , zn°) and NNWHMO(z1, z2, ··· , zn) = NNWHMO(z1°, z2°, ··· , zn°).

Proof. NNHMO(z1, z2, ··· , zn) − NNHMO(z1°, z2°, ··· , zn°) = n(∑_{i=1}^{n} (zi)^(−1))^(−1) − n(∑_{i=1}^{n} (zi°)^(−1))^(−1) = 0, because (z1°, z2°, ··· , zn°) is a permutation of (z1, z2, ··· , zn).
Hence, we have NNHMO(z1, z2, ··· , zn) = NNHMO(z1°, z2°, ··· , zn°).
Again, NNWHMO(z1, z2, ··· , zn) − NNWHMO(z1°, z2°, ··· , zn°) = (∑_{i=1}^{n} wi(zi)^(−1))^(−1) − (∑_{i=1}^{n} wi(zi°)^(−1))^(−1) = 0, because (z1°, z2°, ··· , zn°) is a permutation of (z1, z2, ··· , zn).
Hence, we have NNWHMO(z1, z2, ··· , zn) = NNWHMO(z1°, z2°, ··· , zn°). □

4. Cosine Function for Determining Unknown Criteria Weights


When criteria weights are completely unknown to decision-makers, the entropy measure [56] can
be used to calculate criteria weights. Biswas et al. [57] employed entropy measure for MADM problems
to determine completely unknown attribute weights of single valued neutrosophic sets (SVNSs).
A literature review reflects that a strategy to determine unknown weights in the NN environment is yet to appear. In this paper, we propose a cosine function to determine unknown criteria weights.

Definition 7. The cosine function of a NN P = xij + yij I = [xij + yij IL , xij + yij IU ], (i = 1, 2, . . . , m;
j = 1, 2, . . . , n) is defined as follows:
COSj(P) = (1/n)∑_{i=1}^{n} cos((π/2)·|yij|/√(xij² + yij²)), where xij and yij are not both zeroes   (5)

The weight structure is defined as follows:

wj = COSj(P)/∑_{j=1}^{n} COSj(P); j = 1, 2, ··· , n, and ∑_{j=1}^{n} wj = 1   (6)


The cosine function COS j ( P) satisfies the following properties:

P1. COSj(P) = 1, if yij = 0 and xij ≠ 0.
P2. COSj(P) = 0, if xij = 0 and yij ≠ 0.
P3. COSj(P) ≥ COSj(Q), if xij of P > xij of Q, or yij of P < yij of Q, or both.

Proof.
P1. yij = 0 ⇒ COSj(P) = (1/n)∑_{i=1}^{n} cos 0 = 1.
P2. xij = 0 ⇒ COSj(P) = (1/n)∑_{i=1}^{n} cos(π/2) = 0.
P3. For xij of P > xij of Q:
⇒ determinate part of P > determinate part of Q
⇒ COSj(Q) ≤ COSj(P).
For yij of P < yij of Q:
⇒ indeterminacy part of P < indeterminacy part of Q
⇒ COSj(Q) ≤ COSj(P).
For xij of P > xij of Q and yij of P < yij of Q:
⇒ (determinate part of P > determinate part of Q) and (indeterminacy part of P < indeterminacy part of Q)
⇒ COSj(Q) ≤ COSj(P). □

Example 4. Let two NNs be z1 = 3 + 2I, and z2 = 3 + 5I, then, COS(z1 ) = 0.9066, COS(z2 ) = 0.7817.

Example 5. Let two NNs be z1 = 3 + I, and z2 = 7 + I, then, COS(z1 ) = 0.9693, COS(z2 ) = 0.9938.

Example 6. Let two NNs be z1 = 10 + 2I, and z2 = 2 + 10I, then, COS(z1 ) = 0.9882, COS(z2 ) = 0.7178.
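A small sketch of Equations (5) and (6) as reconstructed above is given below (helper names and the sample matrix are ours; the exact numbers depend on how the cosine argument is normalized, so the output is illustrative rather than a reproduction of Examples 4–6).

```python
import math

# Sketch of Equations (5)-(6): each entry of P is a pair (x_ij, y_ij) for one
# alternative (row) under one criterion (column); values below are illustrative.
def cos_j(column):
    return sum(math.cos((math.pi / 2) * abs(y) / math.sqrt(x**2 + y**2))
               for x, y in column) / len(column)

def criteria_weights(P):
    cols = list(zip(*P))                  # group the entries by criterion
    c = [cos_j(col) for col in cols]      # Equation (5) per criterion
    s = sum(c)
    return [cj / s for cj in c]           # Equation (6): weights summing to 1

P = [[(4, 1), (3, 1), (5, 0)],
     [(6, 0), (6, 0), (5, 0)]]
print(criteria_weights(P))                # three weights that sum to 1
```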

5. Multi-Criteria Group Decision-Making Strategies Based on NNHMO and NNWHMO


Two MCGDM strategies using the NNHMO and NNWHMO respectively are developed in this
section. Suppose that A = {A1 , A2 , . . . , Am } is a set of alternatives, C = {C1 , C2 , . . . , Cn } is a set of
criteria and DM = {DM1 , DM2 , . . . , DMk } is a set of decision-makers. Decision-makers’ assessment for
each alternative Ai will be based on each criterion Cj . All the assessment values are expressed by NNs.
Steps of decision making strategies based on proposed NNHMO and NNWHMO to solve MCGDM
problems are presented below.

5.1. MCGDM Strategy 1 (Based on NNHMO)


Strategy 1 is presented (see Figure 1) using the following six steps:
Step 1. Determine the relation between alternatives and criteria.
Each decision-maker forms a NN decision matrix. The relation between the alternative Ai
(i = 1, 2, . . . , m) and the criterion Cj (j = 1, 2, . . . , n) is presented in Equation (7).
DMk[A|C] =
             C1                 C2              ···          Cn
A1    ⟨x11 + y11 I⟩k    ⟨x12 + y12 I⟩k          ···    ⟨x1n + y1n I⟩k
A2    ⟨x21 + y21 I⟩k    ⟨x22 + y22 I⟩k          ···    ⟨x2n + y2n I⟩k
⋮             ⋮                  ⋮               ⋱             ⋮
Am    ⟨xm1 + ym1 I⟩k    ⟨xm2 + ym2 I⟩k          ···    ⟨xmn + ymn I⟩k        (7)


 
Note 1: Here, ⟨xij + yij I⟩k represents the NN rating value of the alternative Ai with respect to the criterion Cj for the decision-maker DMk.
Step 2. Using Equation (3), determine the aggregation values DMk(Ai)^aggr (i = 1, 2, . . . , n) for all decision matrices.
Step 3. To fuse all the aggregation values DMk(Ai)^aggr corresponding to the alternative Ai, we define the averaging function as follows:

DM^aggr(Ai) = ∑_{t=1}^{k} wt(DMt(Ai)^aggr); ∑_{t=1}^{k} wt = 1 (i = 1, 2, . . . , n; t = 1, 2, . . . , k)   (8)

Here, wt (t = 1, 2, . . . , k) is the weight of the decision-maker DMt .


Step 4. Determine the preference ranking order.
Using Equation (1), determine the score values Sc(zi ) (accuracy degrees Ac(zi ), if necessary)
(i = 1, 2, . . . , m) of all alternatives Ai . All the score values are arranged in descending order.
The alternative corresponding to the highest score value (accuracy values) reflects the best choice.
Step 5. Select the best alternative from the preference ranking order.
Step 6. End.
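To make the six steps concrete, the compact sketch below (ours; it assumes equal decision-maker weights, as in the numerical example of Section 6, and repeats the NNHMO and score helpers so that it is self-contained) chains Equation (3), Equation (8), and Equation (1).

```python
import math

# Compact sketch of Strategy 1 with equal decision-maker weights.
def nnhmo(zs):                                           # Equation (3), closed form
    A = sum(1.0 / x for x, _ in zs)
    B = sum(-y / (x * (x + y)) for x, y in zs)
    return len(zs) / A, -len(zs) * B / (A * (A + B))

def score(x, y, IL, IU):                                 # Equation (1)
    return abs((x + y * (IU - IL)) / (2 * math.sqrt(x**2 + y**2)))

def strategy1(matrices, IL, IU):
    m = len(matrices[0])                                 # number of alternatives
    ranked = []
    for i in range(m):
        aggs = [nnhmo(M[i]) for M in matrices]           # Step 2: NNHMO per decision maker
        x = sum(a for a, _ in aggs) / len(aggs)          # Step 3: equal-weight averaging
        y = sum(b for _, b in aggs) / len(aggs)
        ranked.append((score(x, y, IL, IU), i))          # Step 4: score values
    return sorted(ranked, reverse=True)                  # best alternative first

# Tiny illustrative input: two decision makers, two alternatives, three criteria.
DM1 = [[(4, 1), (3, 1), (5, 0)], [(6, 0), (6, 0), (5, 0)]]
DM2 = [[(5, 0), (4, 0), (4, 0)], [(5, 1), (6, 0), (6, 0)]]
print(strategy1([DM1, DM2], 0.0, 0.2))
```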

(Flowchart: Start → determine the relation between alternatives and criteria → determine the aggregation values → calculate the averaging functional value → determine the preference ranking order → select the best alternative → End.)
Figure 1. Steps of MCGDM Strategy 1 based on NNHMO.

5.2. MCGDM Strategy 2 (Based on NNWHMO)


Strategy 2 is presented (see Figure 2) using the following seven steps:
Step 1. This step is similar to the first step of Strategy 1.
Step 2. Determine the criteria weights.
Using Equation (6), determine the criteria weights from decision matrices ( DMt [ A|C ] ),
(t = 1, 2, . . . , k).
Step 3. Determine the weighted aggregation values DMk(Ai)^waggr.
Using Equation (4), determine the weighted aggregation values DMk(Ai)^waggr (i = 1, 2, . . . , n) for all decision matrices.


Step 4. Determine the averaging values.


To fuse all the weighted aggregation values DMk(Ai)^waggr corresponding to the alternative Ai, we define the averaging function as follows:

DM^waggr(Ai) = ∑_{t=1}^{k} wt(DMt(Ai)^waggr) (i = 1, 2, . . . , n; t = 1, 2, . . . , k)   (9)

Here, wt (t = 1, 2, . . . , k) is the weight of the decision maker DMt .


Step 5. Determine the ranking order.
Using Equation (1), determine the score values S(zi ) (accuracy degrees A(zi ), if necessary)
(i = 1, 2, . . . , m) of all alternatives Ai . All the score values are arranged in descending order. The
alternative corresponding to the highest score value (accuracy values) reflects the best choice.
Step 6. Select the best alternative from the preference ranking order.
Step 7. End.

(Flowchart: Start → determine the relation between alternatives and criteria → determine the criteria weights → determine the weighted aggregation values → calculate the averaging functional value → determine the preference ranking order → select the best alternative → End.)
Figure 2. Steps of MCGDM strategy based on NNWHMO.

6. Simulation Results
We solve a numerical example studied by Zheng et al. [54]. An investment company desires
to invest a sum of money in the best investment fund. There are four possible selection options to
invest the money. Feasible selection options are namely, A1 : Car company (CARC); A2 : Food company
(FOODC); A3 : Computer company (COMC); A4 : Arms company (ARMC). Decision-making must
be based on the three criteria namely, risk analysis (C1 ), growth analysis (C2 ), environmental impact
analysis (C3 ). The four possible selection options/alternatives are to be selected under the criteria by
the NN assessments provided by the three decision-makers DM1 , DM2 , and DM3 .


6.1. Solution Using MCGDM Strategy 1

Step 1. Determine the relation between alternatives and criteria.


All assessment values are provided by the following three NN-based decision matrices (shown in Equations (10)–(12)).
DM1[L|C] =
        C1       C2       C3
A1    4 + I    3 + I      5
A2      6        6        5
A3      3      5 + I      6
A4      7        6      4 + I        (10)

DM2[L|C] =
        C1       C2       C3
A1      5        4        4
A2    5 + I      6        6
A3      4        5      5 + I
A4    6 + I      6        5          (11)

DM3[L|C] =
        C1       C2       C3
A1      4      5 + I      4
A2      6        7      5 + I
A3    4 + I      5        6
A4      8        6      4 + I        (12)
Note 2: Here, DM1 [ L|C ] , DM2 [ L|C ] and DM3 [ L|C ] are the decision matrices for the decision makers
DM1 , DM2 and DM3 respectively.
Step 2. Determine the aggregation values DMk(Ai)^aggr.
Using Equation (3), we calculate the aggregation values DMk(Ai)^aggr as follows:

DM1^aggr(A1) = 3.829 + 0.785I; DM1^aggr(A2) = 5.625; DM1^aggr(A3) = 4.285 + 0.214I; DM1^aggr(A4) = 5.362 + 0.514I;
DM2^aggr(A1) = 4.285; DM2^aggr(A2) = 5.206 + 0.415I; DM2^aggr(A3) = 4.196 + 0.532I; DM2^aggr(A4) = 5.234 + 0.618I;
DM3^aggr(A1) = 4.019 + 0.605I; DM3^aggr(A2) = 5.817 + 0.433I; DM3^aggr(A3) = 4.876 + 0.387I; DM3^aggr(A4) = 6.023 + 0.257I.

Step 3. Determine the averaging values.


Using Equation (8), we calculate the averaging values (Considering equal importance of all the
decision makers) to fuse all the aggregation values corresponding to the alternative Ai .

DM aggr ( A1 ) = 4.044 + 0.463I; DM aggr ( A2 ) = 5.549 + 0.282I; DM aggr ( A3 ) = 4.452 + 0.378I; DM aggr ( A4 ) = 5.539 + 0.463I.

Step 4. Using Equation (1), we calculate the score values Sc(Ai ) (i = 1, 2, 3, 4). Sensitivity analysis and
ranking order of alternatives are shown in Table 1 for different values of I.

Table 1. Sensitivity analysis and ranking order with variation of “I” on NNs for strategy 1.

I Sc(Ai ) Ranking Order


I = [0, 0] S(A1 ) = 0.4988, S(A2 ) = 0.4993, S(A3 ) = 0.4982, S(A4 ) = 0.4983 A2  A1  A4  A3
I ∈ [0, 0.2] S(A1 ) = 0.5081, S(A2 ) = 0.5144, S(A3 ) = 0.5067, S(A4 ) = 0.5056 A2  A1  A3  A4
I ∈ [0, 0.4] S(A1 ) = 0.5182, S(A2 ) = 0.5195, S(A3 ) = 0.5151, S(A4 ) = 0.5249 A2  A1  A4  A3
I ∈ [0, 0.6] S(A1 ) = 0.5289, S(A2 ) = 0.5346, S(A3 ) = 0.5236, S(A4 ) = 0.5233 A2  A1  A3  A4
I ∈ [0, 0.8] S(A1 ) = 0.5396, S(A2 ) = 0.5497, S(A3 ) = 0.5320, S(A4 ) = 0.5316 A2  A1  A3  A4
I ∈ [0, 1] S(A1 ) = 0.5503, S(A2 ) = 0.5547, S(A3 ) = 0.5405, S(A4 ) = 0.5399 A2  A1  A3  A4


Step 5. Food company (FOODC) is the best alternative for investment.


Step 6. End.
Note 3: In Figure 3, we represent the ranking order of the alternatives with variation of “I” based on Strategy 1. Figure 3 reflects that, for various values of I, the ranking order of the alternatives differs. However, the best choice remains the same.

(Bar chart of score values of CARC, FOODC, COMC, and ARMC for I = [0, 0], [0, 0.2], [0, 0.4], [0, 0.6], and [0, 0.8]; vertical axis from 0.46 to 0.56.)

Figure 3. Ranking order with variation of “I” based on strategy 1.

6.2. Solution Using MCGDM Strategy 2

Step 1. Determine the relation between alternatives and criteria.


This step is similar to the first step of strategy 1.
Step 2. Determine the criteria weights.
Using Equations (5) and (6), criteria weights are calculated as follows:

[w1 = 0.3265, w2 = 0.3430, w3 = 0.3305] for DM1 ,

[w1 = 0.3332, w2 = 0.3334, w3 = 0.3334] for DM2 ,

[w1 = 0.3333, w2 = 0.3335, w3 = 0.3332] for DM3 .

Step 3. Determine the weighted aggregation values DMk(Ai)^waggr.
Using Equation (4), we calculate the weighted aggregation values DMk(Ai)^waggr as follows:

DM1^waggr(A1) = 3.861 + 0.774I; DM1^waggr(A2) = 6.006; DM1^waggr(A3) = 4.307 + 0.234I; DM1^waggr(A4) = 5.399 + 0.541I;
DM2^waggr(A1) = 4.288; DM2^waggr(A2) = 5.219 + 0.429I; DM2^waggr(A3) = 4.206 + 0.541I; DM2^waggr(A4) = 5.251 + 0.629I;
DM3^waggr(A1) = 4.024 + 0.616I; DM3^waggr(A2) = 5.824 + 0.445I; DM3^waggr(A3) = 4.889 + 0.393I; DM3^waggr(A4) = 6.029 + 0.265I.

Step 4. Determine the averaging values.


Using Equation (9), we calculate the averaging values (considering equal importance of all the decision-makers) to fuse all the aggregation values corresponding to the alternative Ai.

DM aggr ( A1 ) = 4.057 + 0.463I; DM aggr ( A2 ) = 5.568 + 0.291I; DM aggr ( A3 ) = 4.467 + 0.389I; DM aggr ( A4 ) = 5.559 + 0.478I.

Step 5. Determine the ranking order.


Using Equation (1), we calculate the score values Sc(Ai) (i = 1, 2, 3, 4). Since the score values are different, accuracy values are not required. Sensitivity analysis and ranking order of alternatives are
shown in Table 2 for different values of I.

Table 2. Sensitivity analysis and ranking order with variation of “I” on NNs for strategy 2.

I Sc(Ai ) Ranking Order


I=0 S(A1 ) = 0.4968, S(A2 ) = 0.4993, S(A3 ) = 0.4981, S(A4 ) = 0.4982 A2  A4  A3  A1
I ∈ [0, 0.2] S(A1 ) = 0.5081, S(A2 ) = 0.5095, S(A3 ) = 0.5068, S(A4 ) = 0.5067 A2  A1  A4  A3
I ∈ [0, 0.4] S(A1 ) = 0.5195, S(A2 ) = 0.5198, S(A3 ) = 0.5155, S(A4 ) = 0.5153 A2  A1  A3  A4
I ∈ [0, 0.6] S(A1 ) = 0.5308, S(A2 ) = 0.5350, S(A3 ) = 0.5241, S(A4 ) = 0.5239 A2  A1  A3  A4
I ∈ [0, 0.8] S(A1 ) = 0.5421, S(A2 ) = 0.5502, S(A3 ) = 0.5328, S(A4 ) = 0.5324 A2  A1  A3  A4
I ∈ [0, 1] S(A1 ) = 0.5535, S(A2 ) = 0.5654, S(A3 ) = 0.5415, S(A4 ) = 0.5410 A2  A1  A3  A4

Step 6. Food company (FOODC) is the best alternative for investment.


Step 7. End.
Note 4: In Figure 4, we represent the ranking order of the alternatives with variation of “I” based on Strategy 2. Figure 4 reflects that, for various values of I, the ranking order of the alternatives differs. However, the best choice remains the same.

(Bar chart of score values of CARC, FOODC, COMC, and ARMC for I = [0, 0], [0, 0.2], [0, 0.4], [0, 0.6], [0, 0.8], and [0, 1]; vertical axis from 0.46 to 0.58.)

Figure 4. Ranking order with variation of “I” on NNs for Strategy 2.

7. Comparison Analysis and Contributions of the Proposed Approach

7.1. Comparison Analysis


In this subsection, a comparison analysis is conducted between the proposed MCGDM strategies and other existing strategies in the literature in the NN environment. Table 1 reflects that A2 is the best alternative for I = 0 and I ≠ 0, i.e., for all cases considered. Table 2 reflects that A2 is the best alternative for any value of I. The ranking order differs for different values of I.
The ranking results obtained from the existing strategies [52–54] are furnished in Table 3.
The ranking orders of Ye [52] and Zheng et al. [54] are similar for all values of I considered. When I
lies in [0, 0], [0, 0.2], [0, 0.4], A2 is the best alternative for [52–54] and the proposed strategies. When
I lies in [0, 0.6], [0, 0.8], [0, 1], A4 is the best alternative for [52,54], whereas A2 is the best alternative
for [53], and the proposed strategies.


Table 3. Comparison of ranking preference order with variation of “I” on NNs for different strategies.

I Ye [52] Zheng et al. [54] Liu and Liu [53] Proposed Strategy 1 Proposed Strategy 2
[0, 0] A2  A4  A3  A1 A2  A4  A3  A1 A2  A4  A1  A3 A2  A1  A4  A3 A2  A4  A3  A1
[0, 0.2] A2  A4  A3  A1 A2  A4  A3  A1 A2  A3  A1  A4 A2  A1  A3  A4 A2  A1  A4  A3
[0, 0.4] A2  A4  A3  A1 A2  A4  A3  A1 A2  A3  A4  A1 A2  A1  A4  A3 A2  A1  A3  A4
[0, 0.6] A4  A2  A3  A1 A4  A2  A3  A1 A2  A3  A4  A1 A2  A1  A3  A4 A2  A1  A3  A4
[0, 0.8] A4  A2  A3  A1 A4  A2  A3  A1 A2  A3  A4  A1 A2  A1  A3  A4 A2  A1  A3  A4
[0, 1] A4  A2  A3  A1 A4  A2  A3  A1 A2  A4  A3  A1 A2  A1  A3  A4 A2  A1  A3  A4

In the strategy of [52], a de-neutrosophication process is analyzed; it does not recognize the importance of the aggregation information. The MCGDM strategy due to Liu and Liu [53] is based on an NN generalized weighted power averaging operator. This strategy cannot deal with the situation when a larger value other than the arithmetic mean, geometric mean, and harmonic mean is necessary for experimental purposes. The strategy proposed by Zheng et al. [54] cannot be used when a few observations contribute a disproportionate amount to the arithmetic mean. The two proposed MCGDM strategies are free from these shortcomings.

7.2. Contributions of the Proposed Approach


• NNHMO and NNWHMO in a NN environment are defined in the literature for the first time. We have also proved their basic properties.
• We have proposed score and accuracy functions of NNs for ranking. If two score values are the same, then the accuracy function can be used for ranking purposes.
• The proposed two strategies can also be used when observations/experiments contribute a disproportionate amount to the arithmetic mean. The harmonic mean is used when sample values contain fractions and/or extreme values (either too small or too big).
• To calculate the unknown weight structure of criteria in a NN environment, we have proposed a cosine function.
• Steps and calculations of the proposed strategies are easy to use.
• We have solved a numerical example to show the feasibility, applicability, and effectiveness of the two proposed strategies.

8. Conclusions
In the study, we have proposed NNHMO and NNWHMO. We have developed two strategies of
ranking NNs based on proposed score and accuracy functions. We have proposed a cosine function
to determine unknown weights of the criteria in a NN environment. We have developed two novel
MCGDM strategies based on the proposed aggregation operators. We have solved a hypothetical case
study and compared the obtained results with other existing strategies to demonstrate the effectiveness
of the proposed MCGDM strategies. Sensitivity analysis for different values of I is also conducted to
show the influence of I in preference ranking of the alternatives. The proposed MCGDM strategies can
be applied in supply selection, pattern recognition, cluster analysis, medical diagnosis, etc.

Acknowledgments: The authors are very grateful to the anonymous reviewers for their insightful and constructive
comments and suggestions that have led to an improved version of this paper.
Author Contributions: Kalyan Mondal and Surapati Pramanik conceived and designed the experiments;
Kalyan Mondal, and Surapati Pramanik performed the experiments; Surapati Pramanik, Bibhas C. Giri,
and Florentin Smarandache analyzed the data; Kalyan Mondal, Surapati Pramanik, Bibhas C. Giri, and Florentin Smarandache contributed to the analysis tools; and Kalyan Mondal and Surapati Pramanik wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.


References
1. Hwang, L.; Lin, M.J. Group Decision Making under Multiple Criteria: Methods and Applications; Springer:
Heidelberg, Germany, 1987.
2. Hwang, C.; Yoon, K. Multiple Attribute Decision Making, Methods and Applications; Springer: New York, NY,
USA, 1981; Volume 186.
3. Chen, S.J.; Hwang, C.L. Fuzzy Multiple Attribute Decision-Making, Methods and Applications; Lecture Notes in
Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1992; Volume 375.
4. Chang, T.H.; Wang, T.C. Using the fuzzy multi-criteria decision making approach for measuring the
possibility of successful knowledge management. Inf. Sci. 2009, 179, 355–370. [CrossRef]
5. Krohling, R.A.; De Souza, T.T.M. Combining prospect theory and fuzzy numbers to multi-criteria decision
making. Exp. Syst. Appl. 2012, 39, 11487–11493. [CrossRef]
6. Chen, C.T. Extension of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst.
2000, 114, 1–9. [CrossRef]
7. Zhang, G.; Lu, J. An integrated group decision-making method dealing with fuzzy preferences for
alternatives and individual judgments for selection criteria. Group Decis. Negot. 2003, 12, 501–515. [CrossRef]
8. Krohling, R.A.; Campanharo, V.C. Fuzzy TOPSIS for group decision making: A case study for accidents with
oil spill in the sea. Exp. Syst. Appl. 2011, 38, 4190–4197. [CrossRef]
9. Xia, M.; Xu, Z. A novel method for fuzzy multi-criteria decision making. Int. J. Inf. Technol. Decis. Mak. 2014,
13, 497–519. [CrossRef]
10. Mehlawat, M.K.; Guptal, P. A new fuzzy group multi-criteria decision making method with an application
to the critical path selection. Int. J. Adv. Manuf. Technol. 2015. [CrossRef]
11. Lin, L.; Yuan, X.H.; Xia, Z.Q. Multicriteria fuzzy decision-making based on intuitionistic fuzzy sets. J. Comput.
Syst. Sci. 2007, 73, 84–88. [CrossRef]
12. Liu, H.W.; Wang, G.J. Multi-criteria decision-making methods based on intuitionistic fuzzy sets. Eur. J.
Oper. Res. 2007, 179, 220–233. [CrossRef]
13. Pramanik, S.; Mondal, K. Weighted fuzzy similarity measure based on tangent function and its application
to medical diagnosis. Int. J. Innov. Res. Sci. Eng. Technol. 2015, 4, 158–164.
14. Pramanik, S.; Mukhopadhyaya, D. Grey relational analysis based intuitionistic fuzzy multi-criteria group
decision-making approach for teacher selection in higher education. Int. J. Comput. Appl. 2011, 34, 21–29.
[CrossRef]
15. Mondal, K.; Pramanik, S. Intuitionistic fuzzy multi criteria group decision making approach to quality-brick
selection problem. J. Appl. Quant. Methods 2014, 9, 35–50.
16. Dey, P.P.; Pramanik, S.; Giri, B.C. Multi-criteria group decision making in intuitionistic fuzzy environment
based on grey relational analysis for weaver selection in Khadi institution. J. Appl. Quant. Methods 2015, 10,
1–14.
17. Ye, J. Multicriteria fuzzy decision-making method based on the intuitionistic fuzzy cross-entropy.
In Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics,
Hangzhou, China, 26–27 August 2009.
18. Chen, S.M.; Chang, C.H. A novel similarity measure between Atanassov’s intuitionistic fuzzy sets based on
transformation techniques with applications to pattern recognition. Inf. Sci. 2015, 291, 96–114. [CrossRef]
19. Chen, S.M.; Cheng, S.H.; Chiou, C.H. Fuzzy multi-attribute group decision making based on intuitionistic
fuzzy sets and evidential reasoning methodology. Inf. Fusion 2016, 27, 215–227. [CrossRef]
20. Wang, J.Q.; Han, Z.Q.; Zhang, H.Y. Multi-criteria group decision making method based on intuitionistic
interval fuzzy information. Group Decis. Negot. 2014, 23, 715–733. [CrossRef]
21. Yue, Z.L. TOPSIS-based group decision-making methodology in intuitionistic fuzzy setting. Inf. Sci. 2014,
277, 141–153. [CrossRef]
22. He, X.; Liu, W.F. An intuitionistic fuzzy multi-attribute decision-making method with preference on
alternatives. Oper. Res Manag. Sci. 2013, 22, 36–40.
23. Mondal, K.; Pramanik, S. Intuitionistic fuzzy similarity measure based on tangent function and its application
to multi-attribute decision making. Glob. J. Adv. Res. 2015, 2, 464–471.


24. Peng, H.G.; Wang, J.Q.; Cheng, P.F. A linguistic intuitionistic multi-criteria decision-making method based
on the Frank Heronian mean operator and its application in evaluating coal mine safety. Int. J. Mach. Learn.
Cybern. 2016. [CrossRef]
25. Liang, R.X.; Wang, J.Q.; Zhang, H.Y. A multi-criteria decision-making method based on single-valued
trapezoidal neutrosophic preference relations with complete weight information. Neural Comput. Appl. 2017.
[CrossRef]
26. Wang, J.Q.; Yang, Y.; Li, L. Multi-criteria decision-making method based on single-valued neutrosophic
linguistic Maclaurin symmetric mean operators. Neural Comput. Appl. 2016. [CrossRef]
27. Kharal, A. A neutrosophic multi-criteria decision making method. New Math. Nat. Comput. 2014, 10, 143–162.
[CrossRef]
28. Ye, J. Multiple attribute group decision-making method with completely unknown weights based on
similarity measures under single valued neutrosophic environment. J. Intell. Fuzzy Syst. 2014, 27, 2927–2935.
29. Mondal, K.; Pramanik, S. Multi-criteria group decision making approach for teacher recruitment in higher
education under simplified neutrosophic environment. Neutrosophic Sets Syst. 2014, 6, 28–34.
30. Biswas, P.; Pramanik, S.; Giri, B.C. A new methodology for neutrosophic multi-attribute decision-making
with unknown weight information. Neutrosophic Sets Syst. 2014, 3, 44–54.
31. Biswas, P.; Pramanik, S.; Giri, B.C. Cosine similarity measure based multi-attribute decision-making with
trapezoidal fuzzy neutrosophic numbers. Neutrosophic Sets Syst. 2014, 8, 46–56.
32. Mondal, K.; Pramanik, S. Neutrosophic decision making model for clay-brick selection in construction field
based on grey relational analysis. Neutrosophic Sets Syst. 2015, 9, 64–71.
33. Mondal, K.; Pramanik, S. Neutrosophic tangent similarity measure and its application to multiple attribute
decision making. Neutrosophic Sets Syst. 2015, 9, 85–92.
34. Pramanik, S.; Biswas, P.; Giri, B.C. Hybrid vector similarity measures and their applications to multi-attribute
decision making under neutrosophic environment. Neural Comput. Appl. 2015. [CrossRef]
35. Sahin, R.; Küçük, A. Subsethood measure for single valued neutrosophic sets. J. Intell. Fuzzy Syst. 2015, 29,
525–530. [CrossRef]
36. Mondal, K.; Pramanik, S. Neutrosophic decision making model of school choice. Neutrosophic Sets Syst. 2015,
7, 62–68.
37. Ye, J. An extended TOPSIS method for multiple attribute group decision making based on single valued
neutrosophic linguistic numbers. J. Intell. Fuzzy Syst. 2015, 28, 247–255.
38. Biswas, P.; Pramanik, S.; Giri, B.C. TOPSIS method for multi-attribute group decision-making under
single-valued neutrosophic environment. Neural Comput. Appl. 2016, 27, 727–737. [CrossRef]
39. Biswas, P.; Pramanik, S.; Giri, B.C. Value and ambiguity index based ranking method of single-valued
trapezoidal neutrosophic numbers and its application to multi-attribute decision making. Neutrosophic Sets Syst.
2016, 12, 127–138.
40. Biswas, P.; Pramanik, S.; Giri, B.C. Aggregation of triangular fuzzy neutrosophic set information and its
application to multi-attribute decision making. Neutrosophic Sets Syst. 2016, 12, 20–40.
41. Smarandache, F.; Pramanik, S. (Eds.) New Trends in Neutrosophic Theory and Applications; Pons Editions:
Brussels, Belgium, 2016; pp. 15–161; ISBN 978-1-59973-498-9.
42. Sahin, R.; Liu, P. Maximizing deviation method for neutrosophic multiple attribute decision making with
incomplete weight information. Neural Comput. Appl. 2016, 27, 2017–2029. [CrossRef]
43. Pramanik, S.; Dalapati, S.; Alam, S.; Smarandache, S.; Roy, T.K. NS-cross entropy based MAGDM under
single valued neutrosophic set environment. Information 2018, 9, 37. [CrossRef]
44. Sahin, R.; Liu, P. Possibility-induced simplified neutrosophic aggregation operators and their application to
multi-criteria group decision-making. J. Exp. Theor. Artif. Intell. 2017, 29, 769–785. [CrossRef]
45. Biswas, P.; Pramanik, S.; Giri, B.C. Multi-attribute group decision making based on expected value of
neutrosophic trapezoidal numbers. New Math. Nat. Comput. 2017, in press.
46. Smarandache, F. Introduction to Neutrosophic Measure, Neutrosophic Integral, and Neutrosophic, Probability; Sitech
& Education Publisher: Craiova, Romania, 2013.
47. Smarandache, F. Introduction to Neutrosophic Statistics; Sitech & Education Publishing: Columbus, OH, USA, 2014.
48. Xu, Z. Fuzzy harmonic mean operators. Int. J. Intell. Syst. 2009, 24, 152–172. [CrossRef]
49. Park, J.H.; Gwak, M.G.; Kwun, Y.C. Uncertain linguistic harmonic mean operators and their applications to
multiple attribute group decision making. Computing 2011, 93, 47–64. [CrossRef]

146
Axioms 2018, 7, 12

50. Wei, G.W. FIOWHM operator and its application to multiple attribute group decision making. Expert Syst. Appl.
2011, 38, 2984–2989. [CrossRef]
51. Ye, J. Simplified neutrosophic harmonic averaging projection-based strategy for multiple attribute
decision-making problems. Int. J. Mach. Learn. Cybern. 2017, 8, 981–987. [CrossRef]
52. Ye, J. Multiple attribute group decision making method under neutrosophic number environment. J. Intell. Syst.
2016, 25, 377–386. [CrossRef]
53. Liu, P.; Liu, X. The neutrosophic number generalized weighted power averaging operator and its application
in multiple attribute group decision making. Int. J. Mach. Learn. Cybern. 2016, 1–12. [CrossRef]
54. Zheng, E.; Teng, F.; Liu, P. Multiple attribute group decision-making strategy based on neutrosophic number
generalized hybrid weighted averaging operator. Neural Comput. Appl. 2017, 28, 2063–2074. [CrossRef]
55. Pramanik, S.; Roy, R.; Roy, T.K. Teacher selection strategy based on bidirectional projection measure in
neutrosophic number environment. In Neutrosophic Operational Research; Smarandache, F., Abdel-Basset, M.,
El-Henawy, I., Eds.; Pons Publishing House: Bruxelles, Belgium, 2017; Volume 2; ISBN 978-1-59973-537-5.
56. Majumdar, P.; Samanta, S.K. On similarity and entropy of neutrosophic sets. J. Intell. Fuzzy Syst. 2014,
26, 1245–1252.
57. Biswas, P.; Pramanik, S.; Giri, B.C. Entropy based grey relational analysis strategy for multi-attribute
decision-making under single valued neutrosophic assessments. Neutrosophic Sets. Syst. 2014, 2, 102–110.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

147
axioms
Article
Rough Neutrosophic Digraphs with Application
Sidra Sayed 1 , Nabeela Ishfaq 1 , Muhammad Akram 1, * and Florentin Smarandache 2
1 Department of Mathematics, University of the Punjab, New Campus, Lahore 54590, Pakistan;
[email protected] (S.S.); [email protected] (N.I.)
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
* Correspondence: [email protected]; Tel.: +92-42-99231241

Received: 5 December 2017; Accepted: 15 January 2018; Published: 18 January 2018

Abstract: A rough neutrosophic set model is a hybrid model which deals with vagueness by using
the lower and upper approximation spaces. In this research paper, we apply the concept of rough
neutrosophic sets to graphs. We introduce rough neutrosophic digraphs and describe methods of
their construction. Moreover, we present the concept of self complementary rough neutrosophic
digraphs. Finally, we consider an application of rough neutrosophic digraphs in decision-making.

Keywords: rough neutrosophic sets; rough neutrosophic digraphs; decision-making

MSC: 03E72, 68R10, 68R05

1. Introduction
Smarandache [1] proposed the concept of neutrosophic sets as an extension of fuzzy sets [2].
A neutrosophic set has three components, namely, truth membership, indeterminacy membership
and falsity membership, in which each membership value is a real standard or non-standard subset
of the nonstandard unit interval ]0−, 1+[ ([3]), where 0− = 0 − ε, 1+ = 1 + ε, and ε is an infinitesimal
number > 0. To apply neutrosophic sets in real-life problems more conveniently, Smarandache [3] and
Wang et al. [4] defined single-valued neutrosophic sets which takes the value from the subset of [0, 1].
Actually, the single valued neutrosophic set was introduced for the first time by Smarandache in 1998
in [3]. Ye [5] considered multicriteria decision-making method using the correlation coefficient under
single-valued neutrosophic environment. Ye [6] also presented improved correlation coefficients of
single valued neutrosophic sets and interval neutrosophic sets for multiple attribute decision making.
Rough set theory was proposed by Pawlak [7] in 1982. Rough set theory is useful to study
the intelligence systems containing incomplete, uncertain or inexact information. The lower and
upper approximation operators of rough sets are used for managing hidden information in a system.
Therefore, many hybrid models have been built, such as soft rough sets, rough fuzzy sets, fuzzy
rough sets, soft fuzzy rough sets, neutrosophic rough sets, and rough neutrosophic sets, for handling
uncertainty and incomplete information effectively. Dubois and Prade [8] introduced the notions
of rough fuzzy sets and fuzzy rough sets. Liu and Chen [9] have studied different decision-making
methods. Broumi et al. [10] introduced the concept of rough neutrosophic sets. Yang et al. [11]
proposed single valued neutrosophic rough sets by combining single valued neutrosophic sets
and rough sets, and established an algorithm for decision-making problem based on single valued
neutrosophic rough sets on two universes. Mordeson and Peng [12] presented operations on
fuzzy graphs. Akram et al. [13–16] considered several new concepts of neutrosophic graphs with
applications. Zafer and Akram [17] introduced a novel decision-making method based on rough
fuzzy information. In this research study, we apply the concept of rough neutrosophic sets to graphs.
We introduce rough neutrosophic digraphs and describe methods of their construction. Moreover,

Axioms 2018, 7, 5; doi:10.3390/axioms7010005



we present the concept of self complementary rough neutrosophic digraphs. We also present an
application of rough neutrosophic digraphs in decision-making.
We have used standard definitions and terminologies in this paper. For other notations,
terminologies and applications not mentioned in the paper, the readers are referred to [18–22].

2. Rough Neutrosophic Digraphs


Definition 1. [4] Let Z be a nonempty universe. A neutrosophic set N on Z is defined as follows:

N = {< x : μ N ( x ), σN ( x ), λ N ( x ) >, x ∈ Z }

where the functions μ, σ, λ :Z → [0, 1] represent the degree of membership, the degree of indeterminacy and the
degree of falsity.

Definition 2. [7] Let Z be a nonempty universe and R an equivalence relation on Z. A pair (Z, R) is called an
approximation space. Let N∗ be a subset of Z. The lower and upper approximations of N∗ in the approximation
space (Z, R), denoted by RN∗ and R̄N∗, are defined as follows:

RN∗ = { x ∈ Z | [x]_R ⊆ N∗ },
R̄N∗ = { x ∈ Z | [x]_R ∩ N∗ ≠ ∅ },

where [x]_R denotes the equivalence class of R containing x. A pair (RN∗, R̄N∗) is called a rough set.

Definition 3. [10] Let Z be a nonempty universe and R an equivalence relation on Z. Let N be a neutrosophic
set (NS) on Z. The lower and upper approximations of N in the approximation space (Z, R), denoted by RN and
R̄N, are defined as follows:

RN = {< x, μR(N)(x), σR(N)(x), λR(N)(x) >: y ∈ [x]_R, x ∈ Z },

R̄N = {< x, μR̄(N)(x), σR̄(N)(x), λR̄(N)(x) >: y ∈ [x]_R, x ∈ Z },

where

μR(N)(x) = ⋀_{y ∈ [x]_R} μN(y),   μR̄(N)(x) = ⋁_{y ∈ [x]_R} μN(y),
σR(N)(x) = ⋀_{y ∈ [x]_R} σN(y),   σR̄(N)(x) = ⋁_{y ∈ [x]_R} σN(y),
λR(N)(x) = ⋁_{y ∈ [x]_R} λN(y),   λR̄(N)(x) = ⋀_{y ∈ [x]_R} λN(y).

A pair (RN, R̄N) is called a rough neutrosophic set.

We now define the concept of rough neutrosophic digraph.

Definition 4. Let V ∗ be a nonempty set and R an equivalence relation on V ∗ . Let V be a NS on V ∗ , defined as

V = {< x, μV ( x ), σV ( x ), λV ( x ) >: x ∈ V ∗ }.

Then, the lower and upper approximations of V, represented by RV and R̄V, respectively, are characterized
as NSs in V∗ such that ∀ x ∈ V∗,

R(V) = {< x, μR(V)(x), σR(V)(x), λR(V)(x) >: y ∈ [x]_R },

R̄(V) = {< x, μR̄(V)(x), σR̄(V)(x), λR̄(V)(x) >: y ∈ [x]_R },

where

μR(V)(x) = ⋀_{y ∈ [x]_R} μV(y),   μR̄(V)(x) = ⋁_{y ∈ [x]_R} μV(y),
σR(V)(x) = ⋀_{y ∈ [x]_R} σV(y),   σR̄(V)(x) = ⋁_{y ∈ [x]_R} σV(y),
λR(V)(x) = ⋁_{y ∈ [x]_R} λV(y),   λR̄(V)(x) = ⋀_{y ∈ [x]_R} λV(y).


Let E∗ ⊆ V ∗ × V ∗ and S an equivalence relation on E∗ such that

(( x1 , x2 ), (y1 , y2 )) ∈ S ⇔ ( x1 , y1 ), ( x2 , y2 ) ∈ R.

Let E be a neutrosophic set on E∗ ⊆ V ∗ × V ∗ defined as

E = {< xy, μ E ( xy), σE ( xy), λ E ( xy) >: xy ∈ V ∗ × V ∗ },

such that

μ E ( xy) ≤ min{μ RV ( x ), μ RV (y)},


σE ( xy) ≤ min{σRV ( x ), σRV (y)},
λ E ( xy) ≤ max{λ RV ( x ), λ RV (y)} ∀ x, y ∈ V ∗ .

Then, the lower and upper approximations of E, represented by SE and S̄E, respectively, are defined
as follows:

SE = {< xy, μSE(xy), σSE(xy), λSE(xy) >: wz ∈ [xy]_S, xy ∈ V∗ × V∗ },
S̄E = {< xy, μS̄E(xy), σS̄E(xy), λS̄E(xy) >: wz ∈ [xy]_S, xy ∈ V∗ × V∗ },

where

μS(E)(xy) = ⋀_{wz ∈ [xy]_S} μE(wz),   μS̄(E)(xy) = ⋁_{wz ∈ [xy]_S} μE(wz),
σS(E)(xy) = ⋀_{wz ∈ [xy]_S} σE(wz),   σS̄(E)(xy) = ⋁_{wz ∈ [xy]_S} σE(wz),
λS(E)(xy) = ⋁_{wz ∈ [xy]_S} λE(wz),   λS̄(E)(xy) = ⋀_{wz ∈ [xy]_S} λE(wz).

A pair SE = (SE, S̄E) is called a rough neutrosophic relation.

Definition 5. A rough neutrosophic digraph on a nonempty set V ∗ is a four-ordered tuple G = ( R, RV, S, SE)
such that

(a) R is an equivalence relation on V ∗ ;


(b) S is an equivalence relation on E∗ ⊆ V ∗ × V ∗ ;
(c) RV = ( RV, RV ) is a rough neutrosophic set on V ∗ ;
(d) SE = (SE, SE) is a rough neutrosophic relation on V ∗ and
(e) ( RV, SE) is a neutrosophic digraph where G = ( RV, SE) and G = ( RV, SE) are lower and upper
approximate neutrosophic digraphs of G such that

μSE ( xy) ≤ min{μ RV ( x ), μ RV (y)},

σSE ( xy) ≤ min{σRV ( x ), σRV (y)},

λSE ( xy) ≤ max{λ RV ( x ), λ RV (y)},

and

μSE ( xy) ≤ min{μ RV ( x ), μ RV (y)},


σSE ( xy) ≤ min{σRV ( x ), σRV (y)},
λSE ( xy) ≤ max{λ RV ( x ), λ RV (y)} ∀ x, y ∈ V ∗ .

Example 1. Let V ∗ = { a, b, c} be a set and R an equivalence relation on V ∗


R = ⎡1 0 1⎤
    ⎢0 1 0⎥
    ⎣1 0 1⎦.


Let V = {( a, 0.2, 0.3, 0.6), (b, 0.8, 0.6, 0.5), (c, 0.9, 0.1, 0.4)} be a neutrosophic set on V ∗ . The lower and
upper approximations of V are given by,

RV = {( a, 0.2, 0.1, 0.6), (b, 0.8, 0.6, 0.5), (c, 0.2, 0.1, 0.6)},
RV = {( a, 0.9, 0.3, 0.4), (b, 0.8, 0.6, 0.5), (c, 0.9, 0.3, 0.4)}.

Let E∗ = { aa, ab, ac, bb, ca, cb} ⊆ V ∗ × V ∗ and S an equivalence relation on E∗ defined as:
S = ⎡1 0 1 0 1 0⎤
    ⎢0 1 0 0 0 1⎥
    ⎢1 0 1 0 1 0⎥
    ⎢0 0 0 1 0 0⎥
    ⎢1 0 1 0 1 0⎥
    ⎣0 1 0 0 0 1⎦.

Let E = {( aa, 0.2, 0.1, 0.4), ( ab, 0.2, 0.1, 0.5), ( ac, 0.1, 0.1, 0.5), (bb, 0.7, 0.5, 0.5), (ca, 0.1, 0.1, 0.3),
(cb, 0.2, 0.1, 0.5)} be a neutrosophic set on E∗ and SE = (SE, SE) a rough neutrosophic relation where SE and
SE are given as

SE ={( aa, 0.1, 0.1, 0.5), ( ab, 0.2, 0.1, 0.5), ( ac, 0.1, 0.1, 0.5), (bb, 0.7, 0.5, 0.5),
(ca, 0.1, 0.1, 0.5), (cb, 0.2, 0.1, 0.5)},
SE ={( aa, 0.2, 0.1, 0.3), ( ab, 0.2, 0.1, 0.5), ( ac, 0.2, 0.1, 0.3), (bb, 0.7, 0.5, 0.5),
(ca, 0.2, 0.1, 0.3), (cb, 0.2, 0.1, 0.5)}.

Thus, G = ( RV, SE) and G = ( RV, SE) are neutrosophic digraphs as shown in Figure 1.

Figure 1. Rough neutrosophic digraph G = ( G, G ).
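The approximations in Definition 4 are straightforward to compute once the equivalence classes of R are known. The following Python sketch is illustrative only (the dictionary encoding of the membership triples and the function names are ours, not notation from the paper); it reproduces the vertex approximations RV and R̄V of Example 1.

# Illustrative sketch: lower and upper approximations of a neutrosophic set
# on V* = {a, b, c} under the equivalence relation R of Example 1.
V = {"a": (0.2, 0.3, 0.6), "b": (0.8, 0.6, 0.5), "c": (0.9, 0.1, 0.4)}
classes = {"a": {"a", "c"}, "b": {"b"}, "c": {"a", "c"}}  # [x]_R for every x

def lower(N, classes):
    # truth and indeterminacy: infimum over the class; falsity: supremum
    return {x: (min(N[y][0] for y in cl),
                min(N[y][1] for y in cl),
                max(N[y][2] for y in cl)) for x, cl in classes.items()}

def upper(N, classes):
    # truth and indeterminacy: supremum over the class; falsity: infimum
    return {x: (max(N[y][0] for y in cl),
                max(N[y][1] for y in cl),
                min(N[y][2] for y in cl)) for x, cl in classes.items()}

print(lower(V, classes))  # {'a': (0.2, 0.1, 0.6), 'b': (0.8, 0.6, 0.5), 'c': (0.2, 0.1, 0.6)}
print(upper(V, classes))  # {'a': (0.9, 0.3, 0.4), 'b': (0.8, 0.6, 0.5), 'c': (0.9, 0.3, 0.4)}

The edge approximations SE and S̄E are obtained in exactly the same way, using the equivalence classes of S on E∗.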

We now form new rough neutrosophic digraphs from old ones.

Definition 6. Let G1 = ( G1 , G1 ) and G2 = ( G2 , G2 ) be two rough neutrosophic digraphs on a set V ∗ .


Then, the intersection of G1 and G2 is a rough neutrosophic digraph G = G1 ⊓ G2 = (G1 ∩ G2, Ḡ1 ∩ Ḡ2),
where G1 ∩ G2 = (RV1 ∩ RV2, SE1 ∩ SE2) and Ḡ1 ∩ Ḡ2 = (R̄V1 ∩ R̄V2, S̄E1 ∩ S̄E2) are neutrosophic
digraphs, respectively, such that


(1) μRV1∩RV2(x) = min{μRV1(x), μRV2(x)},
σRV1∩RV2(x) = min{σRV1(x), σRV2(x)},
λRV1∩RV2(x) = max{λRV1(x), λRV2(x)} ∀ x ∈ RV1 ∩ RV2,
μSE1∩SE2(xy) = min{μSE1(xy), μSE2(xy)},
σSE1∩SE2(xy) = min{σSE1(xy), σSE2(xy)},
λSE1∩SE2(xy) = max{λSE1(xy), λSE2(xy)} ∀ xy ∈ SE1 ∩ SE2;

(2) μR̄V1∩R̄V2(x) = min{μR̄V1(x), μR̄V2(x)},
σR̄V1∩R̄V2(x) = min{σR̄V1(x), σR̄V2(x)},
λR̄V1∩R̄V2(x) = max{λR̄V1(x), λR̄V2(x)} ∀ x ∈ R̄V1 ∩ R̄V2,
μS̄E1∩S̄E2(xy) = min{μS̄E1(xy), μS̄E2(xy)},
σS̄E1∩S̄E2(xy) = min{σS̄E1(xy), σS̄E2(xy)},
λS̄E1∩S̄E2(xy) = max{λS̄E1(xy), λS̄E2(xy)} ∀ xy ∈ S̄E1 ∩ S̄E2.
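A minimal computational sketch of Definition 6 follows (ours and purely illustrative; the dictionary layout is an assumption, and the edge rule is read as the componentwise minimum/minimum/maximum of the two edge memberships, in line with the computation in the proof of Theorem 1 below).

# Illustrative sketch: componentwise intersection of two neutrosophic vertex
# (or edge) sets, as prescribed by Definition 6.
def intersect(N1, N2):
    common = N1.keys() & N2.keys()
    return {k: (min(N1[k][0], N2[k][0]),   # truth: minimum
                min(N1[k][1], N2[k][1]),   # indeterminacy: minimum
                max(N1[k][2], N2[k][2]))   # falsity: maximum
            for k in common}

# Illustrative data, not taken from the figures:
RV1 = {"a": (0.2, 0.1, 0.6), "b": (0.8, 0.6, 0.5)}
RV2 = {"a": (0.5, 0.3, 0.2), "b": (0.4, 0.7, 0.1)}
print(intersect(RV1, RV2))  # a -> (0.2, 0.1, 0.6), b -> (0.4, 0.6, 0.5)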

Example 2. Consider the two rough neutrosophic digraphs G1 and G2 as shown in Figures 1 and 2. The
intersection of G1 and G2 is G = G1 ⊓ G2 = (G1 ∩ G2, Ḡ1 ∩ Ḡ2), where G1 ∩ G2 = (RV1 ∩ RV2, SE1 ∩ SE2)
and Ḡ1 ∩ Ḡ2 = (R̄V1 ∩ R̄V2, S̄E1 ∩ S̄E2) are neutrosophic digraphs, as shown in Figure 3.

Figure 2. Rough neutrosophic digraph G = ( G, G ).

Figure 3. Rough neutrosophic digraph G1 ⊓ G2 = (G1 ∩ G2, Ḡ1 ∩ Ḡ2).

Theorem 1. The intersection of two rough neutrosophic digraphs is a rough neutrosophic digraph.

Proof. Let G1 = (G1, Ḡ1) and G2 = (G2, Ḡ2) be two rough neutrosophic digraphs. Let G = G1 ⊓ G2 =
(G1 ∩ G2, Ḡ1 ∩ Ḡ2) be the intersection of G1 and G2, where G1 ∩ G2 = (RV1 ∩ RV2, SE1 ∩ SE2) and
Ḡ1 ∩ Ḡ2 = (R̄V1 ∩ R̄V2, S̄E1 ∩ S̄E2). To prove that G = G1 ⊓ G2 is a rough neutrosophic digraph, it is


enough to show that SE1 ∩ SE2 and S̄E1 ∩ S̄E2 are neutrosophic relations on RV1 ∩ RV2 and R̄V1 ∩ R̄V2,
respectively. First, we show that SE1 ∩ SE2 is a neutrosophic relation on RV1 ∩ RV2.

μSE1 ∩SE2 ( xy) = μSE1 ( xy) ∧ μSE2 ( xy)


≤ (μ RV1 ( x ) ∧ μ RV2 (y)) ∧ (μ RV1 ( x ) ∧ μ RV2 (y))
= (μ RV1 ( x ) ∧ μ RV2 ( x )) ∧ (μ RV1 (y) ∧ μ RV2 (y)
= μ RV1 ∩ RV2 ( x ) ∧ μ RV1 ∩ RV2 (y)
μSE1 ∩SE2 ( xy) ≤ min{μ RV1 ∩ RV2 ( x ), μ RV1 ∩ RV2 (y)}
σSE1 ∩SE2 ( xy) = σSE1 ( xy) ∧ σSE2 ( xy)
≤ (σRV1 ( x ) ∧ σRV2 (y)) ∧ (σRV1 ( x ) ∧ σRV2 (y))
= (σRV1 ( x ) ∧ σRV2 ( x )) ∧ (σRV1 (y) ∧ σRV2 (y)
= σRV1 ∩ RV2 ( x ) ∧ σRV1 ∩ RV2 (y)
σSE1 ∩SE2 ( xy) ≤ min{σRV1 ∩ RV2 ( x ), σRV1 ∩ RV2 (y)}
λSE1 ∩SE2 ( xy) = λSE1 ( xy) ∧ λSE2 ( xy)
≤ (λ RV1 ( x ) ∨ λ RV2 (y)) ∧ (λ RV1 ( x ) ∨ λ RV2 (y))
= (λ RV1 ( x ) ∧ λ RV2 ( x )) ∨ (λ RV1 (y) ∧ λ RV2 (y)
= λ RV1 ∩ RV2 ( x ) ∨ λ RV1 ∩ RV2 (y)
λSE1 ∩SE2 ( xy) ≤ max{λ RV1 ∩ RV2 ( x ), λ RV1 ∩ RV2 (y)}.

Thus, from above it is clear that SE1 ∩ SE2 is a neutrosophic relation on RV1 ∩ RV2 .
Similarly, we can show that SE1 ∩ SE2 is a neutrosophic relation on RV1 ∩ RV2 . Hence, G is a
rough neutrosophic digraph.

Definition 7. The Cartesian product of two rough neutrosophic digraphs G1 and G2 is a rough neutrosophic digraph
G = G1 □ G2 = (G1 □ G2, Ḡ1 □ Ḡ2), where G1 □ G2 = (RV1 □ RV2, SE1 □ SE2) and Ḡ1 □ Ḡ2 = (R̄V1 □ R̄V2, S̄E1 □ S̄E2) are neutrosophic digraphs, respectively, such that

(1) μRV1□RV2(x1, x2) = min{μRV1(x1), μRV2(x2)},
σRV1□RV2(x1, x2) = min{σRV1(x1), σRV2(x2)},
λRV1□RV2(x1, x2) = max{λRV1(x1), λRV2(x2)} ∀ (x1, x2) ∈ RV1 □ RV2,
μSE1□SE2((x, x2)(x, y2)) = min{μRV1(x), μSE2(x2, y2)},
σSE1□SE2((x, x2)(x, y2)) = min{σRV1(x), σSE2(x2, y2)},
λSE1□SE2((x, x2)(x, y2)) = max{λRV1(x), λSE2(x2, y2)} ∀ x ∈ RV1, x2y2 ∈ SE2,
μSE1□SE2((x1, z)(y1, z)) = min{μSE1(x1, y1), μRV2(z)},
σSE1□SE2((x1, z)(y1, z)) = min{σSE1(x1, y1), σRV2(z)},
λSE1□SE2((x1, z)(y1, z)) = max{λSE1(x1, y1), λRV2(z)} ∀ x1y1 ∈ SE1, z ∈ RV2;

(2) μR̄V1□R̄V2(x1, x2) = min{μR̄V1(x1), μR̄V2(x2)},
σR̄V1□R̄V2(x1, x2) = min{σR̄V1(x1), σR̄V2(x2)},
λR̄V1□R̄V2(x1, x2) = max{λR̄V1(x1), λR̄V2(x2)} ∀ (x1, x2) ∈ R̄V1 □ R̄V2,
μS̄E1□S̄E2((x, x2)(x, y2)) = min{μR̄V1(x), μS̄E2(x2, y2)},
σS̄E1□S̄E2((x, x2)(x, y2)) = min{σR̄V1(x), σS̄E2(x2, y2)},
λS̄E1□S̄E2((x, x2)(x, y2)) = max{λR̄V1(x), λS̄E2(x2, y2)} ∀ x ∈ R̄V1, x2y2 ∈ S̄E2,
μS̄E1□S̄E2((x1, z)(y1, z)) = min{μS̄E1(x1, y1), μR̄V2(z)},
σS̄E1□S̄E2((x1, z)(y1, z)) = min{σS̄E1(x1, y1), σR̄V2(z)},
λS̄E1□S̄E2((x1, z)(y1, z)) = max{λS̄E1(x1, y1), λR̄V2(z)} ∀ x1y1 ∈ S̄E1, z ∈ R̄V2.
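The Cartesian product of Definition 7 can be sketched in the same illustrative style (ours, not part of the paper; only one approximation level is shown, and the product vertex is simply the pair of keys).

# Illustrative sketch: Cartesian product of two neutrosophic digraphs
# (one approximation level), following the two edge rules of Definition 7.
def cartesian_product(V1, E1, V2, E2):
    PV = {(x1, x2): (min(V1[x1][0], V2[x2][0]),
                     min(V1[x1][1], V2[x2][1]),
                     max(V1[x1][2], V2[x2][2]))
          for x1 in V1 for x2 in V2}
    PE = {}
    for x in V1:                               # fixed vertex of G1, edge of G2
        for (x2, y2), e in E2.items():
            PE[((x, x2), (x, y2))] = (min(V1[x][0], e[0]),
                                      min(V1[x][1], e[1]),
                                      max(V1[x][2], e[2]))
    for (x1, y1), e in E1.items():             # edge of G1, fixed vertex of G2
        for z in V2:
            PE[((x1, z), (y1, z))] = (min(e[0], V2[z][0]),
                                      min(e[1], V2[z][1]),
                                      max(e[2], V2[z][2]))
    return PV, PE

# Illustrative data, not taken from the figures:
V1 = {"a": (0.2, 0.1, 0.6), "b": (0.8, 0.6, 0.5)}; E1 = {("a", "b"): (0.2, 0.1, 0.5)}
V2 = {"p": (0.5, 0.3, 0.2), "q": (0.4, 0.7, 0.1)}; E2 = {("p", "q"): (0.3, 0.2, 0.2)}
PV, PE = cartesian_product(V1, E1, V2, E2)
print(PV[("a", "p")])                 # (0.2, 0.1, 0.6)
print(PE[(("a", "p"), ("a", "q"))])   # (0.2, 0.1, 0.6)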

Example 3. Let V ∗ = { a, b, c, d} be a set. Let G1 = ( G1 , G1 ) and G2 = ( G2 , G2 ) be two rough neutrosophic


digraphs on V ∗ , as shown in Figures 4 and 5. The cartesian product of G1 and G2 is G = ( G1 × G2 , G1 × G2 ),
where G1 × G2 = ( RN1 × RN2 , SE1 × SE2 ) and G1 × G2 = ( RN1 × RN2 , SE1 × SE2 ) are neutrosophic
digraphs, as shown in Figures 6 and 7, respectively.

Figure 4. Rough neutrosophic digraph G1 = ( G1 , G1 ).

Figure 5. Rough neutrosophic digraph G2 = ( G2 , G2 ).


Figure 6. Neutrosophic digraph G1 × G2 = ( RN1 × RN2 , SE1 × SE2 ).

Figure 7. Neutrosophic digraph G1 × G2 = ( RN1 × RN2 , SE1 × SE2 ).

Theorem 2. The Cartesian product of two rough neutrosophic digraphs is a rough neutrosophic digraph.

Proof. Let G1 = (G1, Ḡ1) and G2 = (G2, Ḡ2) be two rough neutrosophic digraphs. Let G = G1 □ G2 =
(G1 □ G2, Ḡ1 □ Ḡ2) be the Cartesian product of G1 and G2, where G1 □ G2 = (RV1 □ RV2, SE1 □ SE2)
and Ḡ1 □ Ḡ2 = (R̄V1 □ R̄V2, S̄E1 □ S̄E2). To prove that G = G1 □ G2 is a rough neutrosophic digraph,
it is enough to show that SE1 □ SE2 and S̄E1 □ S̄E2 are neutrosophic relations on RV1 □ RV2 and
R̄V1 □ R̄V2, respectively. First, we show that SE1 □ SE2 is a neutrosophic relation on RV1 □ RV2.


If x ∈ RV1 , x2 y2 ∈ SE2 , then

μSE1 SE2 ( x, x2 )( x, y2 ) = μ RV1 ( x ) ∧ μSE2 ( x2 , y2 )


≤ μ RV1 ( x ) ∧ (μ RV2 ( x2 ) ∧ μ RV2 (y2 ))
= (μ RV1 ( x ) ∧ μ RV2 ( x2 )) ∧ (μ RV1 ( x ) ∧ μ RV2 (y2 ))
= μ RV1 RV2 ( x, x2 ) ∧ μ RV1 RV2 ( x, y2 )
μSE1 SE2 ( x, x2 )( x, y2 ) ≤ min{μ RV1 RV2 ( x, x2 ), μ RV1 RV2 ( x, y2 )},
σSE1 SE2 ( x, x2 )( x, y2 ) = σRV1 ( x ) ∧ σSE2 ( x2 , y2 )
≤ σRV1 ( x ) ∧ (σRV2 ( x2 ) ∧ σRV2 (y2 ))
= (σRV1 ( x ) ∧ σRV2 ( x2 )) ∧ (σRV1 ( x ) ∧ σRV2 (y2 )
= σRV1 RV2 ( x, x2 ) ∧ σRV1 RV2 ( x, y2 )
σSE1 SE2 ( x, x2 )( x, y2 ) ≤ min{σRV1 RV2 ( x, x2 ), σRV1 RV2 ( x, y2 )},
λSE1 SE2 ( x, x2 )( x, y2 ) = λ RV1 ( x ) ∨ λSE2 ( x2 , y2 )
≤ λ RV1 ( x ) ∨ (λ RV2 ( x2 ) ∨ λ RV2 (y2 ))
= (λ RV1 ( x ) ∨ λ RV2 ( x2 )) ∨ (λ RV1 ( x ) ∨ λ RV2 (y2 ))
= λ RV1 RV2 ( x, x2 ) ∨ λ RV1 RV2 ( x, y2 )
λSE1 SE2 ( x, x2 , x, y2 ) ≤ max{λ RV1 RV2 ( x, x2 ), λ RV1 RV2 ( x, y2 )}.

If x1 y1 ∈ SE1 , z ∈ RV2 , then

μSE1 SE2 ( x1 , z)(y1 , z) = μSE1 ( x1 , y1 ) ∧ μ RV2 (z)


≤ (μ RV1 ( x1 ) ∧ μ RV1 (y1 )) ∧ μ RV2 (z)
= (μ RV1 ( x1 ) ∧ μ RV2 (z)) ∧ (μ RV1 (y1 ) ∧ μ RV2 (z))
= μ RV1 RV2 ( x1 , z) ∧ μ RV1 RV2 (y1 , z)
μSE1 SE2 ( x1 , z)(y1 , z) ≤ min{μ RV1 RV2 ( x1 , z), μ RV1 RV2 (y1 , z)},
σSE1 SE2 ( x1 , z)(y1 , z) = σSE1 ( x1 , y1 ) ∧ σRV2 (z)
≤ (σRV1 ( x1 ) ∧ σRV1 (y1 )) ∧ σRV2 (z)
= (σRV1 ( x1 ) ∧ σRV2 (z)) ∧ (σRV1 (y1 ) ∧ σRV2 (z))
= σRV1 RV2 ( x1 , z) ∧ σRV1 RV2 (y1 , z)
σSE1 SE2 ( x1 , z)(y1 , z) ≤ min{σRV1 RV2 ( x1 , z), σRV1 RV2 (y1 , z)},
λSE1 SE2 ( x1 , z)(y1 , z) = λSE1 ( x1 , y1 ) ∨ λ RV2 (z)
≤ (λ RV1 ( x1 ) ∨ λ RV1 (y1 )) ∨ λ RV2 (z)
= (λ RV1 ( x1 ) ∨ λ RV2 (z)) ∨ (λ RV1 (y1 ) ∨ λ RV2 (z))
= λ RV1 RV2 ( x1 , z) ∨ λ RV1 RV2 (y1 , z)
λSE1 SE2 ( x1 , z)(y1 , z) ≤ max{λ RV1 RV2 ( x1 , z), λ RV1 RV2 (y1 , z)}.

Thus, from the above, it is clear that SE1 □ SE2 is a neutrosophic relation on RV1 □ RV2.
Similarly, we can show that S̄E1 □ S̄E2 is a neutrosophic relation on R̄V1 □ R̄V2. Hence,
G = (G1 □ G2, Ḡ1 □ Ḡ2) is a rough neutrosophic digraph.

Definition 8. The composition of two rough neutrosophic digraphs G1 and G2 is a rough neutrosophic digraph
G = G1 ◦ G2 = ( G1 ◦ G2 , G1 ◦ G2 ), where G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ) and G1 ◦ G2 = ( RV1 ◦
RV2 , SE1 ◦ SE2 ) are neutrosophic digraphs, respectively, such that

(1) μ RV1 ◦ RV2 ( x1 , x2 ) = min{μ RV1 ( x1 ), μ RV2 ( x2 )},


σRV1◦RV2(x1, x2) = min{σRV1(x1), σRV2(x2)},
λRV1◦RV2(x1, x2) = max{λRV1(x1), λRV2(x2)} ∀ (x1, x2) ∈ RV1 × RV2,
μSE1 ◦SE2 ( x, x2 )( x, y2 ) = min{μ RV1 ( x ), μSE2 ( x2 , y2 )},
σSE1 ◦SE2 ( x, x2 )( x, y2 ) = min{σRV1 ( x ), σSE2 ( x2 , y2 )},
λSE1 ◦SE2 ( x, x2 )( x, y2 ) = max{λ RV1 ( x ), λSE2 ( x2 , y2 )} ∀ x ∈ RV1 , x2 y2 ∈ SE2 ,
μSE1 ◦SE2 ( x1 , z)(y1 , z) = min{μSE1 ( x1 , y1 ), μ RV2 (z)},
σSE1 ◦SE2 ( x1 , z)(y1 , z) = min{σSE1 ( x1 , y1 ), σRV2 (z)},
λSE1 ◦SE2 ( x1 , z)(y1 , z) = max{λSE1 ( x1 , y1 ), λ RV2 (z)} ∀ x1 y1 ∈ SE1 , z ∈ RV2 ,
μSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = min{μSE1 ( x1 , y1 ), μ RV2 ( x2 ), μ RV2 (y2 )},
σSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = min{σSE1 ( x1 , y1 ), σRV2 ( x2 ), σRV2 (y2 )},
λSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = max{λSE1 ( x1 , y1 ), λ RV2 ( x2 ), λ RV2 (y2 )}
∀ x1y1 ∈ SE1, x2, y2 ∈ RV2, x2 ≠ y2.
(2) μR̄V1◦R̄V2(x1, x2) = min{μR̄V1(x1), μR̄V2(x2)},
σR̄V1◦R̄V2(x1, x2) = min{σR̄V1(x1), σR̄V2(x2)},
λR̄V1◦R̄V2(x1, x2) = max{λR̄V1(x1), λR̄V2(x2)} ∀ (x1, x2) ∈ R̄V1 × R̄V2,
μS̄E1◦S̄E2((x, x2)(x, y2)) = min{μR̄V1(x), μS̄E2(x2, y2)},
σS̄E1◦S̄E2((x, x2)(x, y2)) = min{σR̄V1(x), σS̄E2(x2, y2)},
λS̄E1◦S̄E2((x, x2)(x, y2)) = max{λR̄V1(x), λS̄E2(x2, y2)} ∀ x ∈ R̄V1, x2y2 ∈ S̄E2,
μS̄E1◦S̄E2((x1, z)(y1, z)) = min{μS̄E1(x1, y1), μR̄V2(z)},
σS̄E1◦S̄E2((x1, z)(y1, z)) = min{σS̄E1(x1, y1), σR̄V2(z)},
λS̄E1◦S̄E2((x1, z)(y1, z)) = max{λS̄E1(x1, y1), λR̄V2(z)} ∀ x1y1 ∈ S̄E1, z ∈ R̄V2,
μS̄E1◦S̄E2((x1, x2)(y1, y2)) = min{μS̄E1(x1, y1), μR̄V2(x2), μR̄V2(y2)},
σS̄E1◦S̄E2((x1, x2)(y1, y2)) = min{σS̄E1(x1, y1), σR̄V2(x2), σR̄V2(y2)},
λS̄E1◦S̄E2((x1, x2)(y1, y2)) = max{λS̄E1(x1, y1), λR̄V2(x2), λR̄V2(y2)}
∀ x1y1 ∈ S̄E1, x2, y2 ∈ R̄V2, x2 ≠ y2.

Example 4. Let V ∗ = { p, q, r } be a set. Let G1 = ( G1 , G1 ) and G2 = ( G2 , G2 ) be two RND on V ∗ ,


where G1 = ( RV1 , SE1 ) and G1 = ( RV1 , SE1 ) are ND, as shown in Figure 8. G2 = ( RV2 , SE2 ) and
G2 = ( RV2 , SE2 ) are also ND, as shown in Figure 9.
The composition of G1 and G2 is G = G1 ◦ G2 = ( G1 ◦ G2 , G1 ◦ G2 ) where G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦
SE2 ) and G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ) are NDs, as shown in Figures 10 and 11.


Figure 8. Rough neutrosophic digraph G1 = ( G1 , G1 ).

Figure 9. Rough neutrosophic digraph G2 = ( G2 , G2 ).

Figure 10. Neutrosophic digraph G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ).


Figure 11. Neutrosophic digraph G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ).

Theorem 3. The Composition of two rough neutrosophic digraphs is a rough neutrosophic digraph.

Proof. Let G1 = ( G1 , G1 ) and G2 = ( G2 , G2 ) be two rough neutrosophic digraphs. Let G = G1 ◦ G2 =


( G1 ◦ G2 , G1 ◦ G2 ) be the Composition of G1 and G2 , where G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ) and
G1 ◦ G2 = ( RV1 ◦ RV2 , SE1 ◦ SE2 ). To prove that G = G1 ◦ G2 is a rough neutrosophic digraph, it is
enough to show that SE1 ◦ SE2 and SE1 ◦ SE2 are neutrosophic relations on RV1 ◦ RV2 and RV1 ◦ RV2 ,
respectively. First, we show that SE1 ◦ SE2 is a neutrosophic relation on RV1 ◦ RV2 .
If x ∈ RV1 , x2 y2 ∈ SE2 , then

μSE1 ◦SE2 ( x, x2 )( x, y2 ) = μ RV1 ( x ) ∧ μSE2 ( x2 , y2 )


≤ μ RV1 ( x ) ∧ (μ RV2 ( x2 ) ∧ μ RV2 (y2 ))
= (μ RV1 ( x ) ∧ μ RV2 ( x2 )) ∧ (μ RV1 ( x ) ∧ μ RV2 (y2 ))
= μ RV1 ◦ RV2 ( x, x2 ) ∧ μ RV1 ◦ RV2 ( x, y2 )
μSE1 ◦SE2 ( x, x2 )( x, y2 ) ≤ min{μ RV1 ◦ RV2 ( x, x2 ), μ RV1 ◦ RV2 ( x, y2 )},
σSE1 ◦SE2 ( x, x2 )( x, y2 ) = σRV1 ( x ) ∧ σSE2 ( x2 , y2 )
≤ σRV1 ( x ) ∧ (σRV2 ( x2 ) ∧ σRV2 (y2 ))
= (σRV1 ( x ) ∧ σRV2 ( x2 )) ∧ (σRV1 ( x ) ∧ σRV2 (y2 )
= σRV1 ◦ RV2 ( x, x2 ) ∧ σRV1 ◦ RV2 ( x, y2 )
σSE1 ◦SE2 ( x, x2 )( x, y2 ) ≤ min{σRV1 ◦ RV2 ( x, x2 ), σRV1 ◦ RV2 ( x, y2 )},
λSE1 ◦SE2 ( x, x2 )( x, y2 ) = λ RV1 ( x ) ∨ λSE2 ( x2 , y2 )
≤ λ RV1 ( x ) ∨ (λ RV2 ( x2 ) ∨ λ RV2 (y2 ))
= (λ RV1 ( x ) ∨ λ RV2 ( x2 )) ∨ (λ RV1 ( x ) ∨ λ RV2 (y2 ))
= λ RV1 ◦ RV2 ( x, x2 ) ∨ λ RV1 ◦ RV2 ( x, y2 )
λSE1 ◦SE2 ( x, x2 , x, y2 ) ≤ max{λ RV1 ◦ RV2 ( x, x2 ), λ RV1 ◦ RV2 ( x, y2 )}.

If x1 y1 ∈ SE1 , z ∈ RV2 , then

μSE1 ◦SE2 ( x1 , z)(y1 , z) = μSE1 ( x1 , y1 ) ∧ μ RV2 (z)


≤ (μ RV1 ( x1 ) ∧ μ RV1 (y1 )) ∧ μ RV2 (z)
= (μ RV1 ( x1 ) ∧ μ RV2 (z)) ∧ (μ RV1 (y1 ) ∧ μ RV2 (z))
= μ RV1 ◦ RV2 ( x1 , z) ∧ μ RV1 ◦ RV2 (y1 , z)
μSE1 ◦SE2 ( x1 , z)(y1 , z) ≤ min{μ RV1 ◦ RV2 ( x1 , z), μ RV1 ◦ RV2 (y1 , z)},


σSE1 ◦SE2 ( x1 , z)(y1 , z) = σSE1 ( x1 , y1 ) ∧ σRV2 (z)


≤ (σRV1 ( x1 ) ∧ σRV1 (y1 )) ∧ σRV2 (z)
= (σRV1 ( x1 ) ∧ σRV2 (z)) ∧ (σRV1 (y1 ) ∧ σRV2 (z))
= σRV1 ◦ RV2 ( x1 , z) ∧ σRV1 ◦ RV2 (y1 , z)
σSE1 ◦SE2 ( x1 , z)(y1 , z) ≤ min{σRV1 ◦ RV2 ( x1 , z), σRV1 ◦ RV2 (y1 , z)},
λSE1 ◦SE2 ( x1 , z)(y1 , z) = λSE1 ( x1 , y1 ) ∨ λ RV2 (z)
≤ (λ RV1 ( x1 ) ∨ λ RV1 (y1 )) ∨ λ RV2 (z)
= (λ RV1 ( x1 ) ∨ λ RV2 (z)) ∨ (λ RV1 (y1 ) ∨ λ RV2 (z))
= λ RV1 ◦ RV2 ( x1 , z) ∨ λ RV1 ◦ RV2 (y1 , z)
λSE1 ◦SE2 ( x1 , z)(y1 , z) ≤ max{λ RV1 ◦ RV2 ( x1 , z), λ RV1 ◦ RV2 (y1 , z)}.

If x1 y1 ∈ SE1 , x2 , y2 ∈ RV2 such that x2 = y2 ,

μSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = μSE1 ( x1 y1 ) ∧ μ RV2 ( x2 ) ∧ μ RV2 (y2 )


≤ (μ RV1 ( x1 ) ∧ μ RV1 (y1 )) ∧ μ RV2 ( x2 ) ∧ μ RV2 (y2 )
= (μ RV1 ( x1 ) ∧ μ RV2 ( x2 )) ∧ (μ RV1 (y1 )) ∧ μ RV2 (y2 ))
= μ RV1 ◦ RV2 ( x1 , x2 ) ∧ μ RV1 ◦ RV2 (y1 , y2 )
μSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) ≤ min{μ RV1 ◦ RV2 ( x1 , x2 ), μ RV1 ◦ RV2 (y1 , y2 )}
σSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = σSE1 ( x1 y1 ) ∧ σRV2 ( x2 ) ∧ σRV2 (y2 )
≤ (σRV1 ( x1 ) ∧ σRV1 (y1 )) ∧ σRV2 ( x2 ) ∧ σRV2 (y2 )
= (σRV1 ( x1 ) ∧ σRV2 ( x2 )) ∧ (σRV1 (y1 )) ∧ σRV2 (y2 ))
= σRV1 ◦ RV2 ( x1 , x2 ) ∧ σRV1 ◦ RV2 (y1 , y2 )
σSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) ≤ min{σRV1 ◦ RV2 ( x1 , x2 ), σRV1 ◦ RV2 (y1 , y2 )}
λSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) = λSE1 ( x1 y1 ) ∨ λ RV2 ( x2 ) ∨ λ RV2 (y2 )
≤ (λ RV1 ( x1 ) ∨ λ RV1 (y1 )) ∨ λ RV2 ( x2 ) ∨ λ RV2 (y2 )
= (λ RV1 ( x1 ) ∨ λ RV2 ( x2 )) ∨ (λ RV1 (y1 )) ∨ λ RV2 (y2 ))
= λ RV1 ◦ RV2 ( x1 , x2 ) ∨ λ RV1 ◦ RV2 (y1 , y2 )
λSE1 ◦SE2 ( x1 , x2 )(y1 , y2 ) ≤ max{λ RV1 ◦ RV2 ( x1 , x2 ), λ RV1 ◦ RV2 (y1 , y2 )}.

Thus, from above, it is clear that SE1 ◦ SE2 is a neutrosophic relation on RV1 ◦ RV2 .
Similarly, we can show that SE1 ◦ SE2 is a neutrosophic relation on RV1 ◦ RV2 . Hence,
G = (G1 ◦ G2 , G1 ◦ G2 ) is a rough neutrosophic digraph.

Definition 9. Let G = (G, Ḡ) be a RND. The complement of G, denoted by G′ = (G′, Ḡ′), is a rough
neutrosophic digraph, where G′ = ((RV)′, (SE)′) and Ḡ′ = ((R̄V)′, (S̄E)′) are neutrosophic digraphs such that

(1) μ(RV)′(x) = μRV(x),
σ(RV)′(x) = σRV(x),
λ(RV)′(x) = λRV(x) ∀ x ∈ V∗,
μ(SE)′(x, y) = min{μRV(x), μRV(y)} − μSE(xy),
σ(SE)′(x, y) = min{σRV(x), σRV(y)} − σSE(xy),
λ(SE)′(x, y) = max{λRV(x), λRV(y)} − λSE(xy) ∀ x, y ∈ V∗;


(2) μ(R̄V)′(x) = μR̄V(x),
σ(R̄V)′(x) = σR̄V(x),
λ(R̄V)′(x) = λR̄V(x) ∀ x ∈ V∗,
μ(S̄E)′(x, y) = min{μR̄V(x), μR̄V(y)} − μS̄E(xy),
σ(S̄E)′(x, y) = min{σR̄V(x), σR̄V(y)} − σS̄E(xy),
λ(S̄E)′(x, y) = max{λR̄V(x), λR̄V(y)} − λS̄E(xy) ∀ x, y ∈ V∗.

Example 5. Consider a rough neutrosophic digraph as shown in Figure 4. The lower and upper approximations
of graph G are G = ( RV, SE) and G = ( RV, SE), respectively, where

RV = {( a, 0.2, 0.4, 0.6), (b, 0.2, 0.4, 0.6), (c, 0.2, 0.5, 0.9), (d, 0.2, 0.5, 0.9)},
RV = {( a, 0.3, 0.8, 0.3).(b, 0.3, 0.8, 0.3), (c, 0.5, 0.6, 0.8), (d, 0.5, 0.6, 0.8)},

SE = {( aa, 0.2, 0.3, 0.3), ( ab, 0.2, 0.3, 0.3), ( ad, 0.1, 0.3, 0.8), (bc, 0.1, 0.3, 0.8),
(bd, 0.1, 0.3, 0.8), (dc, 0.2, 0.4, 0.7), (dd, 0.2, 0.4, 0.7)},
SE = {( aa, 0.2, 0.4, 0.3), ( ab, 0.2, 0.4, 0.3), ( ad, 0.2, 0.4, 0.7), (bc, 0.2, 0.4, 0.7),
(bd, 0.2, 0.4, 0.7), (dc, 0.2, 0.4, 0.7), (dd, 0.2, 0.4, 0.7)}.

The complement of G is G′ = (G′, Ḡ′). By calculations, we have

( RV ) = {( a, 0.2, 0.4, 0.6), (b, 0.2, 0.4, 0.6), (c, 0.2, 0.5, 0.9), (d, 0.2, 0.5, 0.9)},
( RV ) = {( a, 0.3, 0.8, 0.3).(b, 0.3, 0.8, 0.3), (c, 0.5, 0.6, 0.8), (d, 0.5, 0.6, 0.8)},

(SE) = {( aa, 0, 0.1, 0.3), ( ab, 0, 0.1, 0.3), ( ac, 0.2, 0.4, 0.9), ( ad, 0.1, 0.1, 0.1), (ba, 0.2, 0.4, 0.6), (bb, 0.2, 0.4, 0.6),
(bc, 0.1, 0.1, 0.1), (bd, 0.1, 0.1, 0.1), (ca, 0.2, 0.4, 0.9), (cb, 0.2, 0.4, 0.9), (cc, 0.2, 0.5, 0.9), (cd, 0.2, 0.5, 0.9),
(da, 0.2, 0.4, 0.9), (db, 0.2, 0.4, 0.9), (dc, 0, 0.1, 0.2), (dd, 0, 0.1, 0.2)},
(SE) = {( aa, 0.1, 0.4, 0), ( ab, 0.1, 0.4, 0), ( ac, 0.3, 0.6, 0.8), ( ad, 0.1, 0.2, 0.1), (ba, 0.3, 0.8, 0.3), (bb, 0.3, 0.8, 0.3),
(bc, 0.1, 0.2, 0.1), (bd, 0.1, 0.2, 0.1), (ca, 0.3, 0.6, 0.8), (cb, 0.3, 0.6, 0.8), (cc, 0.5, 0.6, 0.8), (cd, 0.5, 0.6, 0.8),
(da, 0.3, 0.6, 0.8), (db, 0.3, 0.6, 0.8), (dc, 0.3, 0.2, 0.1), (dd, 0.3, 0.2, 0.1)}.


Thus, G′ = ((RV)′, (SE)′) and Ḡ′ = ((R̄V)′, (S̄E)′) are neutrosophic digraphs, as shown in Figure 12.


Figure 12. Rough neutrosophic digraph G′ = (G′, Ḡ′).
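The complement construction of Definition 9 is also easy to verify mechanically. The sketch below is ours and purely illustrative (non-edges of SE are treated as having membership (0, 0, 0)); it recomputes two entries of (SE)′ from the lower approximation data of Example 5.

# Illustrative sketch: complement of the lower neutrosophic digraph (RV, SE)
# of Example 5, following Definition 9.
RV = {"a": (0.2, 0.4, 0.6), "b": (0.2, 0.4, 0.6), "c": (0.2, 0.5, 0.9), "d": (0.2, 0.5, 0.9)}
SE = {("a", "a"): (0.2, 0.3, 0.3), ("a", "b"): (0.2, 0.3, 0.3), ("a", "d"): (0.1, 0.3, 0.8),
      ("b", "c"): (0.1, 0.3, 0.8), ("b", "d"): (0.1, 0.3, 0.8),
      ("d", "c"): (0.2, 0.4, 0.7), ("d", "d"): (0.2, 0.4, 0.7)}

def complement(RV, SE):
    comp = {}
    for x in RV:
        for y in RV:
            mu, sig, lam = SE.get((x, y), (0.0, 0.0, 0.0))
            comp[(x, y)] = (round(min(RV[x][0], RV[y][0]) - mu, 2),
                            round(min(RV[x][1], RV[y][1]) - sig, 2),
                            round(max(RV[x][2], RV[y][2]) - lam, 2))
    return comp

SEc = complement(RV, SE)
print(SEc[("a", "b")])  # (0.0, 0.1, 0.3), matching (ab, 0, 0.1, 0.3) in Example 5
print(SEc[("c", "d")])  # (0.2, 0.5, 0.9), matching (cd, 0.2, 0.5, 0.9)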

Definition 10. A rough neutrosophic digraph G = (G, Ḡ) is self complementary if G and G′ are isomorphic,
that is, G ≅ G′ and Ḡ ≅ Ḡ′.


Example 6. Let V ∗ = { a, b, c} be a set and R an equivalence relation on V ∗ defined as:


R = ⎡1 0 1⎤
    ⎢0 1 0⎥
    ⎣1 0 1⎦.

Let V = {( a, 0.2, 0.4, 0.8), (b, 0.2, 0.4, 0.8), (c, 0.4, 0.6, 0.4)} be a neutrosophic set on V ∗ . The lower and
upper approximations of V are given as,
RV = {( a, 0.2, 0.4, 0.8), (b, 0.2, 0.4, 0.8), (c, 0.2, 0.4, 0.8)},
RV = {( a, 0.4, 0.6, 0.4), (b, 0.2, 0.4, 0.8), (c, 0.4, 0.6, 0.4)}.
Let E∗ = { aa, ab, ac, ba} ⊆ V ∗ × V ∗ and S an equivalence relation on E∗ defined as
S = ⎡1 0 1 0⎤
    ⎢0 1 0 0⎥
    ⎢1 0 1 0⎥
    ⎣0 0 0 1⎦.

Let E = {( aa, 0.1, 0.3, 0.2), ( ab, 0.1, 0.2, 0.4), ( ac, 0.2, 0.2, 0.4), (ba, 0.1, 0.2, 0.4)} be a neutrosophic set
on E∗ and SE = (SE, SE) a RNR where SE and SE are given as
SE = {( aa, 0.1, 0.2, 0.4), ( ab, 0.1, 0.2, 0.4), ( ac, 0.1, 0.2, 0.4), (ba, 0.1, 0.2, 0.4)},
SE = {( aa, 0.2, 0.3, 0.2), ( ab, 0.1, 0.2, 0.4), ( ac, 0.2, 0.3, 0.2), (ba, 0.1, 0.2, 0.4)}.
Thus, G = ( RV, SE) and G = ( RV, SE) are neutrosophic digraphs, as shown in Figure 13.
 
The complement of G is G′ = (G′, Ḡ′), where G′ and Ḡ′ are neutrosophic digraphs, as shown
in Figure 13, and it can be easily shown that G and G′ are isomorphic (and likewise Ḡ and Ḡ′). Hence, G = (G, Ḡ) is a self
complementary RND.

Figure 13. Self complementary RND G = ( G, G ).

Theorem 4. Let G = ( G, G ) be a self complementary rough neutrosophic digraph. Then,

∑_{w,z∈V∗} μSE(wz) = (1/2) ∑_{w,z∈V∗} (μRV(w) ∧ μRV(z)),
∑_{w,z∈V∗} σSE(wz) = (1/2) ∑_{w,z∈V∗} (σRV(w) ∧ σRV(z)),
∑_{w,z∈V∗} λSE(wz) = (1/2) ∑_{w,z∈V∗} (λRV(w) ∨ λRV(z)),
∑_{w,z∈V∗} μS̄E(wz) = (1/2) ∑_{w,z∈V∗} (μR̄V(w) ∧ μR̄V(z)),
∑_{w,z∈V∗} σS̄E(wz) = (1/2) ∑_{w,z∈V∗} (σR̄V(w) ∧ σR̄V(z)),
∑_{w,z∈V∗} λS̄E(wz) = (1/2) ∑_{w,z∈V∗} (λR̄V(w) ∨ λR̄V(z)).


Proof. Let G = ( G, G ) be a self complementary rough neutrosophic digraph. Then, there exist two
isomorphisms g : V ∗ −→ V ∗ and g : V ∗ −→ V ∗ , respectively, such that

μ( RV ) ( g(w)) = μ RV (w),
σ( RV ) ( g(w)) = σRV (w),
λ( RV ) ( g(w)) = λ RV (w), ∀ w ∈ V ∗
μ(SE) ( g(w) g(z)) = μ(SE) (wz),
σ(SE) ( g(w) g(z)) = σ(SE) (wz),
λ(SE) ( g(w) g(z)) = λ(SE) (wz) ∀ w, z ∈ V ∗ .

and

μ( RV ) ( g(w)) = μ RV (w),
σ( RV ) ( g(w)) = σRV (w),
λ( RV ) ( g(w)) = λ RV (w), ∀ w ∈ V ∗
μ(SE) ( g(w) g(z)) = μ(SE) (wz),
σ(SE) ( g(w) g(z)) = σ(SE) (wz),
λ(SE) ( g(w) g(z)) = λ(SE) (wz) ∀ w, z ∈ V ∗ .

By Definition 9, we have

μ(SE) ( g(w) g(z)) = (μ RV (w) ∧ μ RV (z)) − μ(SE) (wz)


μ(SE) (wz) = (μ RV (w) ∧ μ RV (z)) − μ(SE) (wz)
∑ μ(SE) (wz) = ∑ (μ RV (w) ∧ μ RV (z)) − ∑ μ(SE) (wz)
w,z∈V ∗ w,z∈V ∗ w,z∈V ∗
2 ∑ μ(SE) (wz) = ∑ (μ RV (w) ∧ μ RV (z))
w,z∈V ∗ w,z∈V ∗
∑_{w,z∈V∗} μ(SE)(wz) = (1/2) ∑_{w,z∈V∗} (μRV(w) ∧ μRV(z)),

σ(SE) ( g(w) g(z)) = (σRV (w) ∧ σRV (z)) − σ(SE) (wz)


σ(SE) (wz) = (σRV (w) ∧ σRV (z)) − σ(SE) (wz)
∑ σ(SE) (wz) = ∑ (σRV (w) ∧ σRV (z)) − ∑ σ(SE) (wz)
w,z∈V ∗ w,z∈V ∗ w,z∈V ∗
2 ∑ σ(SE) (wz) = ∑ (σRV (w) ∧ σRV (z))
w,z∈V ∗ w,z∈V ∗
∑_{w,z∈V∗} σ(SE)(wz) = (1/2) ∑_{w,z∈V∗} (σRV(w) ∧ σRV(z)),

λ(SE) ( g(w) g(z)) = (λ RV (w) ∨ λ RV (z)) − λ(SE) (wz)


λ(SE) (wz) = (λ RV (w) ∨ λ RV (z)) − λ(SE) (wz)
∑ λ(SE) (wz) = ∑ (λ RV (w) ∨ λ RV (z)) − ∑ λ(SE) (wz)
w,z∈V ∗ w,z∈V ∗ w,z∈V ∗
2 ∑ λ(SE) (wz) = ∑ (λ RV (w) ∨ λ RV (z))
w,z∈V ∗ w,z∈V ∗
∑_{w,z∈V∗} λ(SE)(wz) = (1/2) ∑_{w,z∈V∗} (λRV(w) ∨ λRV(z)).


Similarly, it can be shown that

∑_{w,z∈V∗} μS̄E(wz) = (1/2) ∑_{w,z∈V∗} (μR̄V(w) ∧ μR̄V(z)),
∑_{w,z∈V∗} σS̄E(wz) = (1/2) ∑_{w,z∈V∗} (σR̄V(w) ∧ σR̄V(z)),
∑_{w,z∈V∗} λS̄E(wz) = (1/2) ∑_{w,z∈V∗} (λR̄V(w) ∨ λR̄V(z)).

This completes the proof.

3. Application
Investment is a very good way of getting profit and wisely invested money surely gives certain
profit. The most important factors that influence individual investment decision are: company’s
reputation, corporate earnings and price per share. In this application, we combine these factors into
one factor, i.e. company’s status in industry, to describe overall performance of the company. Let us
consider an individual Mr. Shahid who wants to invest his money. For this purpose, he considers some
private companies, which are Telecommunication company (TC), Carpenter company (CC), Real Estate
business (RE), Vehicle Leasing company (VL), Advertising company (AD), and Textile Testing company
(TT). Let V ∗ ={TC, CC, RE, VL, AD, TT } be a set. Let T be an equivalence relation defined on V ∗
as follows:
T = ⎡1 0 1 0 1 0⎤
    ⎢0 1 0 0 0 0⎥
    ⎢1 0 1 0 1 0⎥
    ⎢0 0 0 1 0 1⎥
    ⎢1 0 1 0 1 0⎥
    ⎣0 0 0 1 0 1⎦.
Let V = {( TC, 0.3, 0.4, 0.1), (CC, 0.8, 0.1, 0.5), ( RE, 0.1, 0.2, 0.6), (VL, 0.9, 0.6, 0.1), ( AD, 0.2, 0.5,
0.2), ( TT, 0.8, 0.6, 0.5)} be a neutrosophic set on V ∗ with three components corresponding to each
company, which represents its status in the industry and TV = ( TV, TV ) a rough neutrosophic set,
where TV and TV are lower and upper approximations of V, respectively, as follows:

TV = {( TC, 0.1, 0.2, 0.6), (CC, 0.8, 0.1, 0.5), ( RE, 0.1, 0.2, 0.6), (VL, 0.8, 0.6, 0.5), ( AD,
0.1, 0.2, 0.6), ( TT, 0.8, 0.6, 0.5)},
TV = {( TC, 0.3, 0.5, 0.1), (CC, 0.8, 0.1, 0.5), ( RE, 0.3, 0.5, 0.1), (VL, 0.9, 0.6, 0.1), ( AD,
0.3, 0.5, 0.1), ( TT, 0.9, 0.6, 0.1)}.
Let E∗ = {( TC, CC ), ( TC, AD ), ( TC, RE), (CC, VL), (CC, TT ), ( AD, RE), ( TT, VL)},

be the set of edges and S an equivalence relation on E∗ defined as follows:


S = ⎡1 0 0 0 0 0 0⎤
    ⎢0 1 1 0 0 1 0⎥
    ⎢0 1 1 0 0 1 0⎥
    ⎢0 0 0 1 1 0 0⎥
    ⎢0 1 0 1 1 0 0⎥
    ⎢0 0 1 0 0 1 0⎥
    ⎣0 0 0 0 0 0 1⎦.


     
Let E = { ( TC, CC ), 0.1, 0.1, 01 , ( TC, AD ), 0.1, 0.2, 0.1 , ( TC, RE), 0.1, 0.2, 0.1 ,
     
(CC, VL), 0.8, 0.1, 0.5 , (CC, TT ), 0.8, 0.1, 0.5 , ( AD, RE), 0.1, 0.2, 0.1 ,
 
( TT, VL), 0.8, 0.6, 0.1 }

be a neutrosophic set on E∗ which represents the relationship between companies, and SE = (SE, S̄E)
a rough neutrosophic relation, where SE and S̄E are the lower and upper approximations of E,
respectively, as follows:
     
SE = {((TC, CC), 0.1, 0.1, 0.1), ((TC, AD), 0.1, 0.2, 0.1), ((TC, RE), 0.1, 0.2, 0.1),
((CC, VL), 0.8, 0.1, 0.5), ((CC, TT), 0.8, 0.1, 0.5), ((AD, RE), 0.1, 0.2, 0.1),
((TT, VL), 0.8, 0.6, 0.1)},
S̄E = {((TC, CC), 0.1, 0.1, 0.1), ((TC, AD), 0.1, 0.2, 0.1), ((TC, RE), 0.1, 0.2, 0.1),
((CC, VL), 0.8, 0.1, 0.5), ((CC, TT), 0.8, 0.1, 0.5), ((AD, RE), 0.1, 0.2, 0.1),
((TT, VL), 0.8, 0.6, 0.1)}.

Thus, G = (TV, SE) and Ḡ = (T̄V, S̄E) together form a rough neutrosophic digraph, as shown in Figure 14.

Figure 14. Rough neutrosophic digraph G = ( G, G ).

To find out the most suitable investment company, we define the score values

S(vi) = ∑_{vi vj ∈ E∗} [T(vj) + I(vj) − F(vj)] / [3 − (T(vi vj) + I(vi vj) − F(vi vj))],

where
T(vj) = (T(vj) + T̄(vj))/2,
I(vj) = (I(vj) + Ī(vj))/2,
F(vj) = (F(vj) + F̄(vj))/2,
(the unbarred values on the right are taken from the lower approximation TV and the barred values from the upper approximation T̄V)

and


T(vi vj) = (T(vi vj) + T̄(vi vj))/2,
I(vi vj) = (I(vi vj) + Ī(vi vj))/2,
F(vi vj) = (F(vi vj) + F̄(vi vj))/2,

of each selected company, and the decision is vk if S(vk) = max_i S(vi). By calculation, we have
S(TC) = 0.4926, S(CC) = 1.4038, S(RE) = 0.0667, S(VL) = 0.3833, S(AD) = 0.1429 and S(TT) = 1.3529.
Clearly, CC is the optimal decision. Therefore, the carpenter company is selected to get maximum
possible profit. We present our proposed method as an algorithm. This Algorithm 1 returns the optimal
solution for the investment problem.

Algorithm 1 Calculation of Optimal decision


1: Input the vertex set V ∗ .
2: Construct an equivalence relation T on the set V ∗ .
3: Calculate the approximation sets TV and TV.
4: Input the edge set E∗ ⊆ V ∗ × V ∗ .
5: Construct an equivalence relation S on E∗ .
6: Calculate the approximation sets SE and SE.
7: Calculate the score values using the formula
   S(vi) = ∑_{vi vj ∈ E∗} [T(vj) + I(vj) − F(vj)] / [3 − (T(vi vj) + I(vi vj) − F(vi vj))].
8: The decision is S(vk) = max_{vi ∈ V∗} S(vi).
9: If vk has more than one value, then any one of S(vk ) may be chosen.
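A compact Python sketch of Algorithm 1 is given below. It is ours and purely illustrative: the data layout and names are assumptions, the score is evaluated exactly as in the displayed formula with the averaged lower/upper values, and only the resulting ranking is of interest.

# Illustrative sketch of Algorithm 1 for the investment example.
def average(lower, upper):
    return {k: tuple((l + u) / 2 for l, u in zip(lower[k], upper[k])) for k in lower}

def scores(vertices, edges):
    # vertices: {v: (T, I, F)}, edges: {(vi, vj): (T, I, F)}, both already averaged
    S = {v: 0.0 for v in vertices}
    for (vi, vj), (Te, Ie, Fe) in edges.items():
        Tj, Ij, Fj = vertices[vj]
        S[vi] += (Tj + Ij - Fj) / (3 - (Te + Ie - Fe))
    return S

TV  = {"TC": (0.1, 0.2, 0.6), "CC": (0.8, 0.1, 0.5), "RE": (0.1, 0.2, 0.6),
       "VL": (0.8, 0.6, 0.5), "AD": (0.1, 0.2, 0.6), "TT": (0.8, 0.6, 0.5)}
TVu = {"TC": (0.3, 0.5, 0.1), "CC": (0.8, 0.1, 0.5), "RE": (0.3, 0.5, 0.1),
       "VL": (0.9, 0.6, 0.1), "AD": (0.3, 0.5, 0.1), "TT": (0.9, 0.6, 0.1)}
SE  = {("TC", "CC"): (0.1, 0.1, 0.1), ("TC", "AD"): (0.1, 0.2, 0.1),
       ("TC", "RE"): (0.1, 0.2, 0.1), ("CC", "VL"): (0.8, 0.1, 0.5),
       ("CC", "TT"): (0.8, 0.1, 0.5), ("AD", "RE"): (0.1, 0.2, 0.1),
       ("TT", "VL"): (0.8, 0.6, 0.1)}  # SE and its upper approximation coincide here

S = scores(average(TV, TVu), SE)
print(max(S, key=S.get))  # CC, in line with the optimal decision above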

4. Conclusions and Future Directions


Neutrosophic sets and rough sets are very important models to handle uncertainty from two
different perspectives. A rough neutrosophic model is a hybrid model which is made by combining
two mathematical models, namely, rough sets and neutrosophic sets. This hybrid model deals with soft
computing and vagueness by using the lower and upper approximation spaces. A rough neutrosophic
set model gives more precise results for decision-making problems as compared to neutrosophic set
model. In this paper, we have introduced the notion of rough neutrosophic digraphs. This research
work can be extended to: (1) rough bipolar neutrosophic soft graphs; (2) bipolar neutrosophic soft
rough graphs; (3) interval-valued bipolar neutrosophic rough graphs; and (4) neutrosophic soft
rough graphs.

Acknowledgments: The authors are very thankful to the Editor and referees for their valuable comments and
suggestions for improving the paper.
Author Contributions: Sidra Sayed and Nabeela Ishfaq conceived and designed the experiments; Muhammad
Akram performed the experiments; Florentin Smarandache contributed reagents/materials/analysis tools.
Conflicts of Interest: The authors declare that they have no conflict of interest regarding the publication of the
research article.

References
1. Smarandache, F. A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic;
American Research Press: Rehoboth, NM, USA, 1999.
2. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
3. Smarandache, F. Neutrosophy Neutrosophic Probability, Set, and Logic; American Research Press:
Rehoboth, NM, USA, 1998.


4. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single-valued neutrosophic sets. Multispace Multistructure
2010, 4, 410–413.
5. Ye, J. Multicriteria decision-making method using the correlation coefficient under single-valued
neutrosophic environment. Int. J. Gen. Syst. 2013, 42, 386–394.
6. Ye, J. Improved correlation coefficients of single valued neutrosophic sets and interval neutrosophic sets for
multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 27, 2453–2462.
7. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
8. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209.
9. Liu, P.; Chen, S.M. Group decision making based on Heronian aggregation operators of intuitionistic fuzzy
numbers. IEEE Trans. Cybern. 2017, 47, 2514–2530.
10. Broumi, S.; Smarandache, F.; Dhar, M. Rough neutrosophic sets. Neutrosophic Sets Syst. 2014, 3, 62–67.
11. Yang, H.L.; Zhang, C.L.; Guo, Z.L.; Liu, Y.L.; Liao, X. A hybrid model of single valued neutrosophic
sets and rough sets: Single valued neutrosophic rough set model. Soft Comput. 2016, 21, 6253–6267,
doi:10.1007/s00500-016-2356-y.
12. Mordeson, J.N.; Peng, C.S. Operations on fuzzy graphs. Inf. Sci. 1994, 79, 159–170.
13. Akram, M.; Shahzadi, S. Neutrosophic soft graphs with application. J. Intell. Fuzzy Syst. 2017, 32, 841–858.
14. Akram, M.; Sarwar, M. Novel multiple criteria decision making methods based on bipolar neutrosophic sets
and bipolar neutrosophic graphs. Ital. J. Pure Appl. Math. 2017, 38, 368–389.
15. Akram, M.; Siddique, S. Neutrosophic competition graphs with applications. J. Intell. Fuzzy Syst. 2017,
33, 921–935.
16. Akram, M.; Sitara, M. Interval-valued neutrosophic graph structures. Punjab Univ. J. Math. 2018, 50, 113–137.
17. Zafer, F.; Akram, M. A novel decision-making method based on rough fuzzy information. Int. J. Fuzzy Syst.
2017, doi:10.1007/s40815-017-0368-0.
18. Banerjee, M.; Pal, S.K. Roughness of a fuzzy set. Inf. Sci. 1996, 93, 235–246.
19. Liu, P.; Chen, S.M.; Junlin, L. Some intuitionistic fuzzy interaction partitioned Bonferroni mean operators
and their application to multi-attribute group decision making. Inf. Sci. 2017, 411, 98–121.
20. Liu, P. Multiple attribute group decision making method based on interval-valued intuitionistic fuzzy power
Heronian aggregation operators. Comput. Ind. Eng. 2017, 108, 199–212.
21. Zhang, X.; Dai, J.; Yu, Y. On the union and intersection operations of rough sets based on various
approximation spaces. Inf. Sci. 2015, 292, 214–229.
22. Bao, Y.L.; Yang, H.L. On single valued neutrosophic refined rough set model and its application.
J. Intell. Fuzzy Syst. 2017, 33, 1235–1248.

axioms
Article
Neutrosophic Positive Implicative N -Ideals
in BCK-Algebras
Young Bae Jun 1 , Florentin Smarandache 2 , Seok-Zun Song 3,∗ and Madad Khan 4
1 Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea;
[email protected]
2 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave., Gallup, NM 87301, USA;
[email protected]
3 Department of Mathematics, Jeju National University, Jeju 63243, Korea
4 Department of Mathematics, COMSATS Institute of Information Technology, Abbottabad 45550, Pakistan;
[email protected]
* Correspondence: [email protected]

Received: 30 October 2017; Accepted: 13 January 2018; Published: 15 January 2018

Abstract: The notion of a neutrosophic positive implicative N -ideal in BCK-algebras is introduced,


and several properties are investigated. Relations between a neutrosophic N -ideal and a neutrosophic
positive implicative N -ideal are discussed. Characterizations of a neutrosophic positive implicative
N -ideal are considered. Conditions for a neutrosophic N -ideal to be a neutrosophic positive
implicative N -ideal are provided. An extension property of a neutrosophic positive implicative
N -ideal based on the negative indeterminacy membership function is discussed.

Keywords: neutrosophic N -structure; neutrosophic N -ideal; neutrosophic positive implicative N -ideal

MSC: 06F35; 03G25; 03B52

1. Introduction
There are many real-life problems that lie beyond the expertise of a single expert because they require a wide
domain of knowledge. As a generalization of the intuitionistic fuzzy set, paraconsistent
set and intuitionistic set, the neutrosophic logic and set was introduced by F. Smarandache [1], and it is
a useful tool to deal with uncertainty in several social and natural aspects. Neutrosophy provides a
foundation for a whole family of new mathematical theories with the generalization of both classical
and fuzzy counterparts. In a neutrosophic set, an element has three associated defining functions
such as truth membership function (T), indeterminate membership function (I) and false membership
function (F) defined on a universe of discourse X. These three functions are completely independent.
The neutrosophic set has vast applications in various fields (see [2–6]).
In order to provide mathematical tool for dealing with negative information, Y. B. Jun, K. J. Lee
and S. Z. Song [7] introduced the notion of negative-valued function, and constructed N -structures.
M. Khan, S. Anis, F. Smarandache and Y. B. Jun [8] introduced the notion of neutrosophic N -structures,
and it is applied to semigroups (see [8]) and BCK/BCI-algebras (see [9]). S. Z. Song, F. Smarandache
and Y. B. Jun [10] studied a neutrosophic commutative N -ideal in BCK-algebras. As well-known,
BCK-algebras originated from two different ways: one of them is based on set theory, and another
is from classical and non-classical propositional calculi (see [11]). The bounded commutative
BCK-algebras are precisely MV-algebras. For MV-algebras, see [12]. The background of this study is
displayed in the second section. In the third section, we introduce the notion of a neutrosophic positive
implicative N -ideal in BCK-algebras, and investigate several properties. We discuss relations between
a neutrosophic N -ideal and a neutrosophic positive implicative N -ideal, and provide conditions for a

Axioms 2018, 7, 3; doi:10.3390/axioms7010003



neutrosophic N -ideal to be a neutrosophic positive implicative N -ideal. We consider characterizations


of a neutrosophic positive implicative N -ideal. We establish an extension property of a neutrosophic
positive implicative N -ideal based on the negative indeterminacy membership function. Conclusions
are provided in the final section.

2. Preliminaries
By a BCI-algebra we mean a set X with a binary operation “∗” and a special element “0” in which
the following conditions are satisfied:

(I) (( x ∗ y) ∗ ( x ∗ z)) ∗ (z ∗ y) = 0,
(II) ( x ∗ ( x ∗ y)) ∗ y = 0,
(III) x ∗ x = 0,
(IV) x∗y = y∗x = 0 ⇒ x = y

for all x, y, z ∈ X. By a BCK-algebra, we mean a BCI-algebra X satisfying the condition

(∀ x ∈ X )(0 ∗ x = 0).

A partial ordering ≤ on X is defined by

(∀ x, y ∈ X) (x ≤ y ⇒ x ∗ y = 0).

Every BCK/BCI-algebra X verifies the following properties.

(∀ x ∈ X ) ( x ∗ 0 = x ), (1)
(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z = ( x ∗ z) ∗ y). (2)

Let I be a subset of a BCK/BCI-algebra. Then I is called an ideal of X if it satisfies the


following conditions.

0 ∈ I, (3)
(∀ x, y ∈ X ) ( x ∗ y ∈ I, y ∈ I ⇒ x ∈ I ) . (4)

Let I be a subset of a BCK-algebra. Then I is called a positive implicative ideal of X if the Condition (3)
holds and the following assertion is valid.

(∀ x, y, z ∈ X ) (( x ∗ y) ∗ z ∈ I, y ∗ z ∈ I ⇒ x ∗ z ∈ I ) . (5)

Any positive implicative ideal is an ideal, but the converse is not true (see [13]).

Lemma 1 ([13]). A subset I of a BCK-algebra X is a positive implicative ideal of X if and only if I is an ideal of
X which satisfies the following condition.

(∀ x, y ∈ X ) (( x ∗ y) ∗ y ∈ I ⇒ x ∗ y ∈ I ) . (6)
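Since conditions (3)-(6) amount to finitely many membership checks over the Cayley table, they can be tested mechanically. The following Python sketch is illustrative only; the list-of-lists table and the chosen subset are our own encoding, and the table is the five-element one used in Example 1 of Section 3 (Table 1).

# Illustrative sketch: test whether a subset I of a finite BCK-algebra, given by
# its Cayley table `star`, is an ideal / a positive implicative ideal (Lemma 1).
def is_ideal(X, star, I):
    if 0 not in I:
        return False
    return all(not (star[x][y] in I and y in I) or x in I for x in X for y in X)

def is_positive_implicative_ideal(X, star, I):
    # Lemma 1: an ideal such that (x*y)*y in I implies x*y in I.
    return is_ideal(X, star, I) and all(
        not (star[star[x][y]][y] in I) or star[x][y] in I for x in X for y in X)

X = [0, 1, 2, 3, 4]            # the algebra of Table 1 below
star = [[0, 0, 0, 0, 0],
        [1, 0, 0, 1, 0],
        [2, 2, 0, 2, 0],
        [3, 3, 3, 0, 3],
        [4, 4, 4, 4, 0]]
print(is_positive_implicative_ideal(X, star, {0, 1, 2}))  # True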

We refer the reader to the books [13,14] for further information regarding BCK/BCI-algebras.
For any family { ai | i ∈ Λ} of real numbers, we define

⋁{ai | i ∈ Λ} := sup{ai | i ∈ Λ}

and

⋀{ai | i ∈ Λ} := inf{ai | i ∈ Λ}.


We denote the collection of functions from a set X to [−1, 0] by F ( X, [−1, 0]). An element of
F ( X, [−1, 0]) is called a negative-valued function from X to [−1, 0] (briefly, N -function on X). An ordered
pair ( X, f ) of X and an N -function f on X is called an N -structure (see [7]).
A neutrosophic N -structure over a nonempty universe of discourse X (see [8]) is defined to be
the structure

XN := { x/(TN(x), IN(x), FN(x)) | x ∈ X }     (7)

where TN , IN and FN are N -functions on X which are called the negative truth membership function,
the negative indeterminacy membership function and the negative falsity membership function, respectively,
on X.
For the sake of simplicity, we will use the notation XN or XN := X/(TN, IN, FN) instead of the
neutrosophic N -structure in (7).
Recall that every neutrosophic N -structure XN over X satisfies the following condition:

(∀ x ∈ X ) (−3 ≤ TN ( x ) + IN ( x ) + FN ( x ) ≤ 0) .

3. Neutrosophic Positive Implicative N -ideals


In what follows, let X denote a BCK-algebra unless otherwise specified.

Definition 1 ([9]). Let XN be a neutrosophic N -structure over X. Then XN is called a neutrosophic N -ideal
of X if the following condition holds.
(∀ x, y ∈ X) ( TN(0) ≤ TN(x) ≤ ⋁{TN(x ∗ y), TN(y)},
               IN(0) ≥ IN(x) ≥ ⋀{IN(x ∗ y), IN(y)},
               FN(0) ≤ FN(x) ≤ ⋁{FN(x ∗ y), FN(y)} ).     (8)

Definition 2. A neutrosophic N -structure XN over X is called a neutrosophic positive implicative N -ideal of


X if the following assertions are valid.

(∀ x ∈ X ) ( TN (0) ≤ TN ( x ), IN (0) ≥ IN ( x ), FN (0) ≤ FN ( x )) , (9)


(∀ x, y, z ∈ X) ( TN(x ∗ z) ≤ ⋁{TN((x ∗ y) ∗ z), TN(y ∗ z)},
                  IN(x ∗ z) ≥ ⋀{IN((x ∗ y) ∗ z), IN(y ∗ z)},
                  FN(x ∗ z) ≤ ⋁{FN((x ∗ y) ∗ z), FN(y ∗ z)} ).     (10)

Example 1. Let X = {0, 1, 2, 3, 4} be a BCK-algebra with the Cayley table in Table 1.

Table 1. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 0 1 0
2 2 2 0 2 0
3 3 3 3 0 3
4 4 4 4 4 0

Let
XN = { 0/(−0.9,−0.2,−0.7), 1/(−0.7,−0.6,−0.7), 2/(−0.5,−0.7,−0.6), 3/(−0.1,−0.4,−0.4), 4/(−0.3,−0.8,−0.2) }

be a neutrosophic N -structure over X. Then XN is a neutrosophic positive implicative N -ideal of X.
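Conditions (9) and (10) can be verified for Example 1 by exhausting all triples. The sketch below is ours and purely illustrative; it encodes the data of Example 1 and Table 1, using max and min for the suprema and infima of two-element sets of negative reals.

# Illustrative sketch: check conditions (9) and (10) for the neutrosophic
# N-structure of Example 1 over the BCK-algebra of Table 1.
star = [[0, 0, 0, 0, 0],
        [1, 0, 0, 1, 0],
        [2, 2, 0, 2, 0],
        [3, 3, 3, 0, 3],
        [4, 4, 4, 4, 0]]
T = {0: -0.9, 1: -0.7, 2: -0.5, 3: -0.1, 4: -0.3}
I = {0: -0.2, 1: -0.6, 2: -0.7, 3: -0.4, 4: -0.8}
F = {0: -0.7, 1: -0.7, 2: -0.6, 3: -0.4, 4: -0.2}

def is_npi_ideal(X, star, T, I, F):
    if not all(T[0] <= T[x] and I[0] >= I[x] and F[0] <= F[x] for x in X):
        return False                                     # condition (9)
    for x in X:
        for y in X:
            for z in X:
                a, b = star[star[x][y]][z], star[y][z]   # (x*y)*z and y*z
                if T[star[x][z]] > max(T[a], T[b]):
                    return False                         # condition (10), T part
                if I[star[x][z]] < min(I[a], I[b]):
                    return False                         # condition (10), I part
                if F[star[x][z]] > max(F[a], F[b]):
                    return False                         # condition (10), F part
    return True

print(is_npi_ideal(range(5), star, T, I, F))  # True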


If we take z = 0 in (10) and use (1), then we have the following theorem.

Theorem 1. Every neutrosophic positive implicative N -ideal is a neutrosophic N -ideal.

The following example shows that the converse of Theorem 1 does not holds.

Example 2. Let X = {0, a, b, c} be a BCK-algebra with the Cayley table in Table 2.

Table 2. Cayley table for the binary operation “∗”.

* 0 a b c
0 0 0 0 0
a a 0 0 a
b b a 0 b
c c c c 0

Let
XN = { 0/(t0, i2, f0), a/(t1, i1, f2), b/(t1, i1, f2), c/(t2, i0, f1) }

be a neutrosophic N -structure over X, where t0 < t1 < t2, i0 < i1 < i2 and f0 < f1 < f2 in [−1, 0]. Then
XN is a neutrosophic N -ideal of X. But it is not a neutrosophic positive implicative N -ideal of X since

TN(b ∗ a) = TN(a) = t1 ≰ t0 = ⋁{TN((b ∗ a) ∗ a), TN(a ∗ a)},

IN(b ∗ a) = IN(a) = i1 ≱ i2 = ⋀{IN((b ∗ a) ∗ a), IN(a ∗ a)},

or

FN(b ∗ a) = FN(a) = f2 ≰ f0 = ⋁{FN((b ∗ a) ∗ a), FN(a ∗ a)}.

Given a neutrosophic N -structure XN over X and α, β, γ ∈ [−1, 0] with −3 ≤ α + β + γ ≤ 0,


we define the following sets.

TN^α := { x ∈ X | TN(x) ≤ α },
IN^β := { x ∈ X | IN(x) ≥ β },
FN^γ := { x ∈ X | FN(x) ≤ γ }.

Then we say that the set

XN (α, β, γ) := { x ∈ X | TN ( x ) ≤ α, IN ( x ) ≥ β, FN ( x ) ≤ γ}

is the (α, β, γ)-level set of XN (see [9]). Obviously, we have

XN(α, β, γ) = TN^α ∩ IN^β ∩ FN^γ.

Theorem 2. If XN is a neutrosophic positive implicative N -ideal of X, then TN^α, IN^β and FN^γ are positive
implicative ideals of X for all α, β, γ ∈ [−1, 0] with −3 ≤ α + β + γ ≤ 0 whenever they are nonempty.

Proof. Assume that TN^α, IN^β and FN^γ are nonempty for all α, β, γ ∈ [−1, 0] with −3 ≤ α + β + γ ≤ 0.
Then x ∈ TN^α, y ∈ IN^β and z ∈ FN^γ for some x, y, z ∈ X. Thus TN(0) ≤ TN(x) ≤ α, IN(0) ≥


IN(y) ≥ β, and FN(0) ≤ FN(z) ≤ γ, that is, 0 ∈ TN^α ∩ IN^β ∩ FN^γ. Let (x ∗ y) ∗ z ∈ TN^α and y ∗ z ∈ TN^α.
Then TN((x ∗ y) ∗ z) ≤ α and TN(y ∗ z) ≤ α, which imply that

TN(x ∗ z) ≤ ⋁{TN((x ∗ y) ∗ z), TN(y ∗ z)} ≤ α,

that is, x ∗ z ∈ TN^α. If (a ∗ b) ∗ c ∈ IN^β and b ∗ c ∈ IN^β, then IN((a ∗ b) ∗ c) ≥ β and IN(b ∗ c) ≥ β. Thus

IN(a ∗ c) ≥ ⋀{IN((a ∗ b) ∗ c), IN(b ∗ c)} ≥ β,

and so a ∗ c ∈ IN^β. Finally, suppose that (u ∗ v) ∗ w ∈ FN^γ and v ∗ w ∈ FN^γ. Then FN((u ∗ v) ∗ w) ≤ γ and
FN(v ∗ w) ≤ γ. Thus

FN(u ∗ w) ≤ ⋁{FN((u ∗ v) ∗ w), FN(v ∗ w)} ≤ γ,

that is, u ∗ w ∈ FN^γ. Therefore TN^α, IN^β and FN^γ are positive implicative ideals of X.

Corollary 1. Let XN be a neutrosophic N -structure over X and let α, β, γ ∈ [−1, 0] be such that −3 ≤
α + β + γ ≤ 0. If XN is a neutrosophic positive implicative N -ideal of X, then the nonempty (α, β, γ)-level set
of XN is a positive implicative ideal of X.

Proof. Straightforward.

The following example illustrates Theorem 2.

Example 3. Let X = {0, 1, 2, 3, 4} be a BCK-algebra with the Cayley table in Table 3.

Table 3. Cayley table for the binary operation “∗”.

* 0 1 2 3 4
0 0 0 0 0 0
1 1 0 1 1 0
2 2 2 0 2 0
3 3 3 3 0 0
4 4 4 4 4 0

Let
XN = { 0/(−0.8,−0.3,−0.7), 1/(−0.7,−0.6,−0.4), 2/(−0.4,−0.4,−0.5), 3/(−0.3,−0.5,−0.6), 4/(−0.2,−0.9,−0.1) }

be a neutrosophic N -structure over X. Routine calculations show that XN is a neutrosophic positive implicative
N -ideal of X. Then


TN^α = ∅ if α ∈ [−1, −0.8),
       {0} if α ∈ [−0.8, −0.7),
       {0, 1} if α ∈ [−0.7, −0.4),
       {0, 1, 2} if α ∈ [−0.4, −0.3),
       {0, 1, 2, 3} if α ∈ [−0.3, −0.2),
       X if α ∈ [−0.2, 0],




IN^β = ∅ if β ∈ (−0.3, 0],
       {0} if β ∈ (−0.4, −0.3],
       {0, 2} if β ∈ (−0.5, −0.4],
       {0, 2, 3} if β ∈ (−0.6, −0.5],
       {0, 1, 2, 3} if β ∈ (−0.9, −0.6],
       X if β ∈ [−1, −0.9],

and


FN^γ = ∅ if γ ∈ [−1, −0.7),
       {0} if γ ∈ [−0.7, −0.6),
       {0, 3} if γ ∈ [−0.6, −0.5),
       {0, 2, 3} if γ ∈ [−0.5, −0.4),
       {0, 1, 2, 3} if γ ∈ [−0.4, −0.1),
       X if γ ∈ [−0.1, 0],

which are positive implicative ideals of X.
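The listed level sets can be recomputed directly from the definitions; a brief illustrative sketch (ours) with the data of Example 3 follows.

# Illustrative sketch: (alpha, beta, gamma)-level sets for Example 3.
T = {0: -0.8, 1: -0.7, 2: -0.4, 3: -0.3, 4: -0.2}
I = {0: -0.3, 1: -0.6, 2: -0.4, 3: -0.5, 4: -0.9}
F = {0: -0.7, 1: -0.4, 2: -0.5, 3: -0.6, 4: -0.1}

def T_level(alpha):  # T_N^alpha
    return {x for x in T if T[x] <= alpha}

def I_level(beta):   # I_N^beta
    return {x for x in I if I[x] >= beta}

def F_level(gamma):  # F_N^gamma
    return {x for x in F if F[x] <= gamma}

print(T_level(-0.5))   # {0, 1},   alpha in [-0.7, -0.4)
print(I_level(-0.45))  # {0, 2},   beta in (-0.5, -0.4]
print(F_level(-0.55))  # {0, 3},   gamma in [-0.6, -0.5)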

Lemma 2 ([9]). Every neutrosophic N -ideal XN of X satisfies the following assertions:

(∀ x, y ∈ X) (x ≼ y ⇒ TN(x) ≤ TN(y), IN(x) ≥ IN(y), FN(x) ≤ FN(y)). (11)

We discuss conditions for a neutrosophic N -ideal to be a neutrosophic positive implicative


N -ideal.

Theorem 3. Let XN be a neutrosophic N -ideal of X. Then XN is a neutrosophic positive implicative N -ideal


of X if and only if the following assertion is valid.
(∀ x, y ∈ X)
  TN(x ∗ y) ≤ TN((x ∗ y) ∗ y),
  IN(x ∗ y) ≥ IN((x ∗ y) ∗ y),
  FN(x ∗ y) ≤ FN((x ∗ y) ∗ y). (12)

Proof. Assume that XN is a neutrosophic positive implicative N -ideal of X. If z is replaced by y


in (10), then

TN(x ∗ y) ≤ ∨{TN((x ∗ y) ∗ y), TN(y ∗ y)}
= ∨{TN((x ∗ y) ∗ y), TN(0)} = TN((x ∗ y) ∗ y),

IN(x ∗ y) ≥ ∧{IN((x ∗ y) ∗ y), IN(y ∗ y)}
= ∧{IN((x ∗ y) ∗ y), IN(0)} = IN((x ∗ y) ∗ y),

and

FN(x ∗ y) ≤ ∨{FN((x ∗ y) ∗ y), FN(y ∗ y)}
= ∨{FN((x ∗ y) ∗ y), FN(0)} = FN((x ∗ y) ∗ y)

by (III) and (9).


Conversely, let XN be a neutrosophic N -ideal of X satisfying (12). Since

((x ∗ z) ∗ z) ∗ (y ∗ z) ≼ (x ∗ z) ∗ y = (x ∗ y) ∗ z


for all x, y, z ∈ X, we have


(∀ x, y, z ∈ X)
  TN(((x ∗ z) ∗ z) ∗ (y ∗ z)) ≤ TN((x ∗ y) ∗ z),
  IN(((x ∗ z) ∗ z) ∗ (y ∗ z)) ≥ IN((x ∗ y) ∗ z),
  FN(((x ∗ z) ∗ z) ∗ (y ∗ z)) ≤ FN((x ∗ y) ∗ z).

by Lemma 2. It follows from (8) and (12) that

TN(x ∗ z) ≤ TN((x ∗ z) ∗ z)
≤ ∨{TN(((x ∗ z) ∗ z) ∗ (y ∗ z)), TN(y ∗ z)}
≤ ∨{TN((x ∗ y) ∗ z), TN(y ∗ z)},

IN(x ∗ z) ≥ IN((x ∗ z) ∗ z)
≥ ∧{IN(((x ∗ z) ∗ z) ∗ (y ∗ z)), IN(y ∗ z)}
≥ ∧{IN((x ∗ y) ∗ z), IN(y ∗ z)},

and

FN(x ∗ z) ≤ FN((x ∗ z) ∗ z)
≤ ∨{FN(((x ∗ z) ∗ z) ∗ (y ∗ z)), FN(y ∗ z)}
≤ ∨{FN((x ∗ y) ∗ z), FN(y ∗ z)}.

Therefore XN is a neutrosophic positive implicative N -ideal of X.

Lemma 3 ([9]). For any neutrosophic N -ideal XN of X, we have


(∀ x, y, z ∈ X) ( x ∗ y ≼ z ⇒
  TN(x) ≤ ∨{TN(y), TN(z)},
  IN(x) ≥ ∧{IN(y), IN(z)},
  FN(x) ≤ ∨{FN(y), FN(z)} ). (13)

Lemma 4. If a neutrosophic N -structure XN over X satisfies the condition (13), then XN is a neutrosophic
N -ideal of X.

Proof. Since 0 ∗ x ≼ x for all x ∈ X, we have TN(0) ≤ TN(x), IN(0) ≥ IN(x) and FN(0) ≤ FN(x)
for all x ∈ X by (13). Note that x ∗ (x ∗ y) ≼ y for all x, y ∈ X. It follows from (13) that
TN(x) ≤ ∨{TN(x ∗ y), TN(y)}, IN(x) ≥ ∧{IN(x ∗ y), IN(y)}, and FN(x) ≤ ∨{FN(x ∗ y), FN(y)}
for all x, y ∈ X. Therefore XN is a neutrosophic N -ideal of X.

Theorem 4. For any neutrosophic N -structure XN over X, the following assertions are equivalent.

(1) XN is a neutrosophic positive implicative N -ideal of X.


(2) XN satisfies the following condition.
((x ∗ y) ∗ y) ∗ a ≼ b ⇒
  TN(x ∗ y) ≤ ∨{TN(a), TN(b)},
  IN(x ∗ y) ≥ ∧{IN(a), IN(b)},
  FN(x ∗ y) ≤ ∨{FN(a), FN(b)}, (14)

for all x, y, a, b ∈ X.


Proof. Suppose that XN is a neutrosophic positive implicative N -ideal of X. Then XN is a neutrosophic


N -ideal of X by Theorem 1. Let x, y, a, b ∈ X be such that ((x ∗ y) ∗ y) ∗ a ≼ b. Then

TN(x ∗ y) ≤ TN((x ∗ y) ∗ y) ≤ ∨{TN(a), TN(b)},
IN(x ∗ y) ≥ IN((x ∗ y) ∗ y) ≥ ∧{IN(a), IN(b)},
FN(x ∗ y) ≤ FN((x ∗ y) ∗ y) ≤ ∨{FN(a), FN(b)}

by Theorem 3 and Lemma 3.


Conversely, let XN be a neutrosophic N -structure over X that satisfies (14). Let x, a, b ∈ X be such
that x ∗ a ≼ b. Then ((x ∗ 0) ∗ 0) ∗ a ≼ b, and so

TN(x) = TN(x ∗ 0) ≤ ∨{TN(a), TN(b)},
IN(x) = IN(x ∗ 0) ≥ ∧{IN(a), IN(b)},
FN(x) = FN(x ∗ 0) ≤ ∨{FN(a), FN(b)}.

Hence XN is a neutrosophic N -ideal of X by Lemma 4. Since ((x ∗ y) ∗ y) ∗ ((x ∗ y) ∗ y) ≼ 0, it
follows from (14) and (9) that

TN(x ∗ y) ≤ ∨{TN((x ∗ y) ∗ y), TN(0)} = TN((x ∗ y) ∗ y),
IN(x ∗ y) ≥ ∧{IN((x ∗ y) ∗ y), IN(0)} = IN((x ∗ y) ∗ y),
FN(x ∗ y) ≤ ∨{FN((x ∗ y) ∗ y), FN(0)} = FN((x ∗ y) ∗ y),

for all x, y ∈ X. Therefore XN is a neutrosophic positive implicative N -ideal of X by Theorem 3.

Lemma 5 ([9]). Let XN be a neutrosophic N -structure over X and assume that TN^α, IN^β and FN^γ are ideals of X
for all α, β, γ ∈ [−1, 0] with −3 ≤ α + β + γ ≤ 0. Then XN is a neutrosophic N -ideal of X.

Theorem 5. Let XN be a neutrosophic N -structure over X and assume that TN^α, IN^β and FN^γ are positive
implicative ideals of X for all α, β, γ ∈ [−1, 0] with −3 ≤ α + β + γ ≤ 0. Then XN is a neutrosophic positive
implicative N -ideal of X.

Proof. If TN^α, IN^β and FN^γ are positive implicative ideals of X, then TN^α, IN^β and FN^γ are ideals of X.
Thus XN is a neutrosophic N -ideal of X by Lemma 5. Let x, y ∈ X and α, β, γ ∈ [−1, 0] with
−3 ≤ α + β + γ ≤ 0 be such that TN((x ∗ y) ∗ y) = α, IN((x ∗ y) ∗ y) = β and FN((x ∗ y) ∗ y) = γ. Then
(x ∗ y) ∗ y ∈ TN^α ∩ IN^β ∩ FN^γ. Since TN^α ∩ IN^β ∩ FN^γ is a positive implicative ideal of X, it follows from
Lemma 1 that x ∗ y ∈ TN^α ∩ IN^β ∩ FN^γ. Hence

TN ( x ∗ y) ≤ α = TN (( x ∗ y) ∗ y),
IN ( x ∗ y) ≥ β = IN (( x ∗ y) ∗ y),
FN ( x ∗ y) ≤ γ = FN (( x ∗ y) ∗ y).

Therefore XN is a neutrosophic positive implicative N -ideal of X by Theorem 3.


Lemma 6 ([9]). Let XN be a neutrosophic N -ideal of X. Then XN satisfies the condition (12) if and only if it
satisfies the following condition.
(∀ x, y, z ∈ X)
  TN((x ∗ z) ∗ (y ∗ z)) ≤ TN((x ∗ y) ∗ z),
  IN((x ∗ z) ∗ (y ∗ z)) ≥ IN((x ∗ y) ∗ z),
  FN((x ∗ z) ∗ (y ∗ z)) ≤ FN((x ∗ y) ∗ z). (15)

Corollary 2. Let XN be a neutrosophic N -ideal of X. Then XN is a neutrosophic positive implicative N -ideal


of X if and only if XN satisfies (15).

Proof. It follows from Theorem 3 and Lemma 6.

Theorem 6. For any neutrosophic N -structure XN over X, the following assertions are equivalent.

(1) XN is a neutrosophic positive implicative N -ideal of X.


(2) XN satisfies the following condition.
((x ∗ y) ∗ z) ∗ a ≼ b ⇒
  TN((x ∗ z) ∗ (y ∗ z)) ≤ ∨{TN(a), TN(b)},
  IN((x ∗ z) ∗ (y ∗ z)) ≥ ∧{IN(a), IN(b)},
  FN((x ∗ z) ∗ (y ∗ z)) ≤ ∨{FN(a), FN(b)}, (16)

for all x, y, z, a, b ∈ X.

Proof. Suppose that XN is a neutrosophic positive implicative N -ideal of X. Then XN is a neutrosophic


N -ideal of X by Theorem 1. Let x, y, z, a, b ∈ X be such that ((x ∗ y) ∗ z) ∗ a ≼ b. Using Corollary 2 and
Lemma 3, we have

TN((x ∗ z) ∗ (y ∗ z)) ≤ TN((x ∗ y) ∗ z) ≤ ∨{TN(a), TN(b)},
IN((x ∗ z) ∗ (y ∗ z)) ≥ IN((x ∗ y) ∗ z) ≥ ∧{IN(a), IN(b)},
FN((x ∗ z) ∗ (y ∗ z)) ≤ FN((x ∗ y) ∗ z) ≤ ∨{FN(a), FN(b)}

for all x, y, z, a, b ∈ X.
Conversely, let XN be a neutrosophic N -structure over X that satisfies (16). Let x, y, a, b ∈ X be
such that ((x ∗ y) ∗ y) ∗ a ≼ b. Then

TN(x ∗ y) = TN((x ∗ y) ∗ (y ∗ y)) ≤ ∨{TN(a), TN(b)},
IN(x ∗ y) = IN((x ∗ y) ∗ (y ∗ y)) ≥ ∧{IN(a), IN(b)},
FN(x ∗ y) = FN((x ∗ y) ∗ (y ∗ y)) ≤ ∨{FN(a), FN(b)}

by (III), (1) and (16). It follows from Theorem 4 that XN is a neutrosophic positive implicative N -ideal
of X.

Theorem 7. Let XN be a neutrosophic N -structure over X. Then XN is a neutrosophic positive implicative


N -ideal of X if and only if XN satisfies (9) and
(∀ x, y, z ∈ X)
  TN(x ∗ y) ≤ ∨{TN(((x ∗ y) ∗ y) ∗ z), TN(z)},
  IN(x ∗ y) ≥ ∧{IN(((x ∗ y) ∗ y) ∗ z), IN(z)},
  FN(x ∗ y) ≤ ∨{FN(((x ∗ y) ∗ y) ∗ z), FN(z)}. (17)


Proof. Assume that XN is a neutrosophic positive implicative N -ideal of X. Then XN is a neutrosophic


N -ideal of X by Theorem 1, and so the condition (9) is valid. Using (8), (III), (1), (2) and (15), we have

TN(x ∗ y) ≤ ∨{TN((x ∗ y) ∗ z), TN(z)}
= ∨{TN(((x ∗ z) ∗ y) ∗ (y ∗ y)), TN(z)}
≤ ∨{TN(((x ∗ z) ∗ y) ∗ y), TN(z)}
= ∨{TN(((x ∗ y) ∗ y) ∗ z), TN(z)},

IN(x ∗ y) ≥ ∧{IN((x ∗ y) ∗ z), IN(z)}
= ∧{IN(((x ∗ z) ∗ y) ∗ (y ∗ y)), IN(z)}
≥ ∧{IN(((x ∗ z) ∗ y) ∗ y), IN(z)}
= ∧{IN(((x ∗ y) ∗ y) ∗ z), IN(z)},

and

FN(x ∗ y) ≤ ∨{FN((x ∗ y) ∗ z), FN(z)}
= ∨{FN(((x ∗ z) ∗ y) ∗ (y ∗ y)), FN(z)}
≤ ∨{FN(((x ∗ z) ∗ y) ∗ y), FN(z)}
= ∨{FN(((x ∗ y) ∗ y) ∗ z), FN(z)}

for all x, y, z ∈ X. Therefore (17) is valid.


Conversely, if XN is a neutrosophic N -structure over X satisfying Conditions (9) and (17), then

TN(x) = TN(x ∗ 0) ≤ ∨{TN(((x ∗ 0) ∗ 0) ∗ z), TN(z)} = ∨{TN(x ∗ z), TN(z)},
IN(x) = IN(x ∗ 0) ≥ ∧{IN(((x ∗ 0) ∗ 0) ∗ z), IN(z)} = ∧{IN(x ∗ z), IN(z)},
FN(x) = FN(x ∗ 0) ≤ ∨{FN(((x ∗ 0) ∗ 0) ∗ z), FN(z)} = ∨{FN(x ∗ z), FN(z)}

for all x, z ∈ X. Hence XN is a neutrosophic N -ideal of X. Now, if we take z = 0 in (17) and use (1), then

TN(x ∗ y) ≤ ∨{TN(((x ∗ y) ∗ y) ∗ 0), TN(0)}
= ∨{TN((x ∗ y) ∗ y), TN(0)} = TN((x ∗ y) ∗ y),

IN(x ∗ y) ≥ ∧{IN(((x ∗ y) ∗ y) ∗ 0), IN(0)}
= ∧{IN((x ∗ y) ∗ y), IN(0)} = IN((x ∗ y) ∗ y),

and

FN(x ∗ y) ≤ ∨{FN(((x ∗ y) ∗ y) ∗ 0), FN(0)}
= ∨{FN((x ∗ y) ∗ y), FN(0)} = FN((x ∗ y) ∗ y)

for all x, y ∈ X. It follows from Theorem 3 that XN is a neutrosophic positive implicative N -ideal
of X.

Summarizing the above results, we have a characterization of a neutrosophic positive


implicative N -ideal.


Theorem 8. For a neutrosophic N -structure XN over X, the following assertions are equivalent.

(1) XN is a neutrosophic positive implicative N -ideal of X.


(2) XN is a neutrosophic N -ideal of X satisfying the condition (12).
(3) XN is a neutrosophic N -ideal of X satisfying the condition (15).
(4) XN satisfies conditions (9) and (17).
(5) XN satisfies the condition (14).
(6) XN satisfies the condition (16).

For any fixed numbers ξT, ξF ∈ [−1, 0), ξI ∈ (−1, 0] and a nonempty subset G of X, a neutrosophic
N -structure XN^G over X is defined to be the structure

XN^G := X/(TN^G, IN^G, FN^G) = { x/(TN^G(x), IN^G(x), FN^G(x)) | x ∈ X } (18)

where TN^G, IN^G and FN^G are N -functions on X which are given as follows:

TN^G : X → [−1, 0], x ↦ ξT if x ∈ G, and 0 otherwise;
IN^G : X → [−1, 0], x ↦ ξI if x ∈ G, and −1 otherwise;
FN^G : X → [−1, 0], x ↦ ξF if x ∈ G, and 0 otherwise.
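As a small illustration of the construction in (18) (a sketch of our own, with hypothetical names; the ξ values below are arbitrary choices in the required ranges), one may write:

def characteristic_structure(X, G, xi_t=-0.5, xi_i=-0.5, xi_f=-0.5):
    """Sketch of XN^G from (18): each element of X receives the triple
    (xi_T, xi_I, xi_F) if it lies in G, and (0, -1, 0) otherwise."""
    return {x: (xi_t, xi_i, xi_f) if x in G else (0.0, -1.0, 0.0) for x in X}

# Example: G = {0, 1} inside X = {0, 1, 2, 3, 4}
print(characteristic_structure(range(5), {0, 1}))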

Theorem 9. Given a nonempty subset G of X, a neutrosophic N -structure XN^G over X is a neutrosophic positive
implicative N -ideal of X if and only if G is a positive implicative ideal of X.

Proof. Assume that G is a positive implicative ideal of X. Since 0 ∈ G, it follows that TN^G(0) = ξT ≤ TN^G(x),
IN^G(0) = ξI ≥ IN^G(x), and FN^G(0) = ξF ≤ FN^G(x) for all x ∈ X. For any x, y, z ∈ X, we consider four cases:

Case 1. (x ∗ y) ∗ z ∈ G and y ∗ z ∈ G,
Case 2. (x ∗ y) ∗ z ∈ G and y ∗ z ∉ G,
Case 3. (x ∗ y) ∗ z ∉ G and y ∗ z ∈ G,
Case 4. (x ∗ y) ∗ z ∉ G and y ∗ z ∉ G.
Case 1 implies that x ∗ z ∈ G, and thus

TN^G(x ∗ z) = TN^G((x ∗ y) ∗ z) = TN^G(y ∗ z) = ξT,
IN^G(x ∗ z) = IN^G((x ∗ y) ∗ z) = IN^G(y ∗ z) = ξI,
FN^G(x ∗ z) = FN^G((x ∗ y) ∗ z) = FN^G(y ∗ z) = ξF.

Hence

TN^G(x ∗ z) ≤ ∨{TN^G((x ∗ y) ∗ z), TN^G(y ∗ z)},
IN^G(x ∗ z) ≥ ∧{IN^G((x ∗ y) ∗ z), IN^G(y ∗ z)},
FN^G(x ∗ z) ≤ ∨{FN^G((x ∗ y) ∗ z), FN^G(y ∗ z)}.


If Case 2 is valid, then TN^G(y ∗ z) = 0, IN^G(y ∗ z) = −1 and FN^G(y ∗ z) = 0. Thus

TN^G(x ∗ z) ≤ 0 = ∨{TN^G((x ∗ y) ∗ z), TN^G(y ∗ z)},
IN^G(x ∗ z) ≥ −1 = ∧{IN^G((x ∗ y) ∗ z), IN^G(y ∗ z)},
FN^G(x ∗ z) ≤ 0 = ∨{FN^G((x ∗ y) ∗ z), FN^G(y ∗ z)}.

For Case 3, the argument is similar to Case 2. For Case 4, it is clear that

TN^G(x ∗ z) ≤ ∨{TN^G((x ∗ y) ∗ z), TN^G(y ∗ z)},
IN^G(x ∗ z) ≥ ∧{IN^G((x ∗ y) ∗ z), IN^G(y ∗ z)},
FN^G(x ∗ z) ≤ ∨{FN^G((x ∗ y) ∗ z), FN^G(y ∗ z)}.

Therefore XN^G is a neutrosophic positive implicative N -ideal of X.
Conversely, suppose that XN^G is a neutrosophic positive implicative N -ideal of X. Then
(TN^G)^{ξT/2} = G, (IN^G)^{ξI/2} = G and (FN^G)^{ξF/2} = G are positive implicative ideals of X by Theorem 2.

We consider an extension property of a neutrosophic positive implicative N -ideal based on the


negative indeterminacy membership function.

Lemma 7 ([13]). Let A and B be ideals of X such that A ⊆ B. If A is a positive implicative ideal of X, then so is B.

Theorem 10. Let

XN := X/(TN, IN, FN) = { x/(TN(x), IN(x), FN(x)) | x ∈ X }

and

XM := X/(TM, IM, FM) = { x/(TM(x), IM(x), FM(x)) | x ∈ X }

be neutrosophic N -ideals of X such that XN (=, ≤, =) XM , that is, TN ( x ) = TM ( x ), IN ( x ) ≤ I M ( x ) and


FN ( x ) = FM ( x ) for all x ∈ X. If XN is a neutrosophic positive implicative N -ideal of X, then so is XM .

Proof. Assume that XN is a neutrosophic positive implicative N -ideal of X. Then TN^α, IN^β and FN^γ are
positive implicative ideals of X for all α, β, γ ∈ [−1, 0] by Theorem 2. The condition XN (=, ≤, =) XM
implies that TN^{ξT} = TM^{ξT}, IN^{ξI} ⊆ IM^{ξI} and FN^{ξF} = FM^{ξF}. It follows from Lemma 7 that TM^α, IM^β and FM^γ
are positive implicative ideals of X for all α, β, γ ∈ [−1, 0]. Therefore XM is a neutrosophic positive
implicative N -ideal of X by Theorem 5.

4. Conclusions
The aim of this paper is to study the neutrosophic N -structure of positive implicative ideals in
BCK-algebras, and to provide a mathematical tool for dealing with information containing uncertainty,
for example, in decision making problems, medical diagnosis, graph theory, pattern recognition, etc.
As a more general platform which extends the concepts of the classic set, fuzzy set, intuitionistic
fuzzy set and interval-valued intuitionistic fuzzy set, F. Smarandache has developed the neutrosophic
set (NS) in [1,15]. In this manuscript, we have discussed the notion of a neutrosophic
positive implicative N -ideal in BCK-algebras, and investigated several properties. We have considered
relations between a neutrosophic N -ideal and a neutrosophic positive implicative N -ideal. We have
provided conditions for a neutrosophic N -ideal to be a neutrosophic positive implicative N -ideal, and
considered characterizations of a neutrosophic positive implicative N -ideal. We have established an
extension property of a neutrosophic positive implicative N -ideal based on the negative indeterminacy
membership function.


Various sources of uncertainty can make it challenging to reach a reliable decision. Based on the results
in this paper, our future research will focus on solving real-life problems under the opinions of
experts in a neutrosophic set environment, for example, decision making problems, medical diagnosis,
etc. Future work may also study neutrosophic set theory on several related algebraic
structures, such as BL-algebras, MTL-algebras, R0-algebras, MV-algebras, EQ-algebras and lattice implication
algebras.
Acknowledgements: The authors thank the academic editor for his valuable comments and
suggestions and the anonymous reviewers for their valuable suggestions. The corresponding author,
Seok-Zun Song, was supported by Basic Science Research Program through the National Research
Foundation of Korea (NRF) funded by the Ministry of Education (No. 2016R1D1A1B02006812).
Author Contributions: This paper is a result of common work of the authors in all aspects.
Conflicts of Interest: The authors declare no conflict of interest

References
1. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic
Probability; American Research Press: Rehoboth, NM, USA, 1999.
2. Broumi, S.; Smarandache, F. Correlation coefficient of interval neutrosophic sets. Appl. Mech. Mater. 2013, 436,
511–517.
3. Cheng, H.D.; Guo, Y. A new neutrosophic approach to image thresholding. New Math. Nat. Comput. 2008, 4,
291–308.
4. Guo, Y.; Cheng, H.D. New neutrosophic approach to image segmentation. Pattern Recognit. 2009, 42, 587–595.
5. Kharal, A. A neutrosophic multicriteria decision making method. New Math. Nat. Comput. 2014, 10, 143–162.
6. Ye, J. Similarity measures between interval neutrosophic sets and their multicriteria decision-making method.
J. Intell. Fuzzy Syst. 2014, 26, 165–172.
7. Jun, Y.B.; Lee, K.J.; Song, S.Z. N -ideals of BCK/BCI-algebras. J. Chungcheong Math. Soc. 2009, 22, 417–437.
8. Khan, M.; Anis, S.; Smarandache, F.; Jun, Y.B. Neutrosophic N -structures and their applications in semigroups.
Ann. Fuzzy Math. Inform. 2017, 14, 583–598.
9. Jun, Y.B.; Smarandache, F.; Bordbar, H. Neutrosophic N -structures applied to BCK/BCI-algebras. Information
2017, 8, 128.
10. Song, S.Z.; Smarandache, F.; Jun, Y.B. Neutrosophic commutative N -ideals in BCK-algebras. Information 2017,
8, 130.
11. Hong, S.M.; Jun, Y.B.; Ozturk, M.A. Generalization of BCK-algebras. Sci. Math. Jpn. 2003, 58, 603–611.
12. Oner, T.; Senturk, I.; Oner, G. An independent set of axioms of MV-algebras and solutions of the Set-Theoretical
Yang-Baxter equation. Axioms 2017, 6, 17
13. Meng, J.; Jun, Y.B. BCK-Algebras; Kyungmoon Sa Co.: Seoul, Korea, 1994.
14. Huang, Y.S. BCI-Algebra; Science Press: Beijing, China, 2006.
15. Smarandache, F. Neutrosophic set-a generalization of the intuitionistic fuzzy set. Int. J. Pure Appl. Math. 2005,
24, 287–297.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

axioms
Article
Neutrosophic Hough Transform
Ümit Budak 1 , Yanhui Guo 2, *, Abdulkadir Şengür 3 and Florentin Smarandache 4
1 Department of Electrical-Electronics Engineering, Engineering Faculty, Bitlis Eren University,
13000 Bitlis, Turkey; [email protected]
2 Department of Computer Science, University of Illinois at Springfield, One University Plaza,
Springfield, IL 62703, USA
3 Department of Electrical and Electronics Engineering, Technology Faculty, Firat University,
Elazig 23119, Turkey; [email protected]
4 Mathematics & Science Department, University of New Mexico, 705 Gurley Ave.,
Gallup, NM 87301, USA; [email protected]
* Correspondence: [email protected]; Tel.: +1-217-2068-170

Received: 22 November 2017; Accepted: 14 December 2017; Published: 18 December 2017

Abstract: The Hough transform (HT) is a useful tool for both the pattern recognition and image processing
communities. From the pattern recognition point of view, it can extract unique features for the description of
various shapes, such as lines, circles, and ellipses. From the image processing point of view, a dozen
applications can be handled with HT, such as lane detection for autonomous cars, blood cell detection
in microscope images, and so on. As HT is a straightforward shape detector in a given image,
its shape detection ability is low in noisy images. To alleviate its weakness on noisy images and
improve its shape detection performance, in this paper, we proposed neutrosophic Hough transform
(NHT). As it was proved earlier, neutrosophy theory based image processing applications were
successful in noisy environments. To this end, the Hough space is initially transferred into the
NS domain by calculating the NS membership triples (T, I, and F). An indeterminacy filtering is
constructed where the neighborhood information is used in order to remove the indeterminacy in
the spatial neighborhood of neutrosophic Hough space. The potential peaks are detected based on
thresholding on the neutrosophic Hough space, and these peak locations are then used to detect the
lines in the image domain. Extensive experiments on noisy and noise-free images are performed in
order to show the efficiency of the proposed NHT algorithm. We also compared our proposed NHT
with traditional HT and fuzzy HT methods on variety of images. The obtained results showed the
efficiency of the proposed NHT on noisy images.

Keywords: Hough transform; fuzzy Hough transform; neutrosophy theory; line detection

1. Introduction
The Hough transform (HT), which is known as a popular pattern descriptor, was proposed
in the sixties by Paul Hough [1]. This popular and efficient shape-based pattern descriptor was then
introduced to the image processing and computer vision community in the seventies by Duda and Hart [2].
The HT converts image domain features (e.g., edges) into another parameter space, namely Hough
space. In Hough space, every point (xi, yi) on a straight line in the image domain corresponds to a
point (ρi, θi).
In the literature, there have been so many variations of Hough’s proposal [3–14]. In [3],
Chung et al. proposed a new HT based on affine transformation. Memory-usage efficiency was
provided by proposed affine transformation, which was in the center of Chung’s method. Authors
targeted to detect the lines by using slope intercept parameters in the Hough space. In [4], Guo et al.
proposed a methodology for an efficient HT by utilizing surround-suppression. Authors introduced a


measure of isotropic surround-suppression, which suppresses the false peaks caused by the texture
regions in Hough space. In traditional HT, image features are voted in the parameter space in order
to construct the Hough space. Thus, if an image has more features (e.g., texture), construction of the
Hough space becomes computationally expensive. In [5], Walsh et al. proposed the probabilistic HT.
Probabilistic HT was proposed in order to reduce the computation-cost of the traditional HT by using
a selection step. The selection step in probabilistic HT approach just considered the image features that
contributed the Hough space. In [6], Xu et al. proposed another probabilistic HT algorithm, called
randomized HT. Authors used a random selection mechanism to select a number of image domain
features in order to construct the Hough space parameters. In other words, only the hypothesized
parameter set was used to increment the accumulator. In [7], Han et al. used the fuzzy concept in HT
(FHT) for detecting the shapes in noisy images. FHT enables detecting shapes in noisy environment
by approximately fitting the image features avoiding the spurious detected using the traditional HT.
The FTH can be obtained by one-dimensional (1-D) convolution of the rows of the Hough space. In [8],
Montseny et al. proposed a new FHT method where edge orientations were considered. Gradient
vectors were considered to enable edge orientations in FHT. Based on the stability properties of
the gradient vectors, some relevant orientations were taken into consideration, which reduced the
computation burden, dramatically. In [9], Mathavan et al. proposed an algorithm to detect cracks in
pavement images based on FHT. Authors indicated that due to the intensity variations and texture
content of the pavement images, FHT was quite convenient to detect the short lines (cracks). In [10],
Chatzis et al. proposed a variation of HT. In the proposed method, authors first split the Hough space
into fuzzy cells, which were defined as fuzzy numbers. Then, a fuzzy voting procedure was adopted
to construct the Hough domain parameters. In [11], Suetake et al. proposed a generalized FHT (GFHT)
for arbitrary shape detection in contour images. The GFHT was derived by fuzzifying the vote process
in the HT. In [12], Chung et al. proposed another orientation based HT for real world applications.
In the proposed method, authors filtered out those inappropriate edge pixels before performing their
HT based methodology. In [13], Cheng et al. proposed an eliminating- particle-swarm-optimization
(EPSO) algorithm to reduce the computation cost of HT. In [14], Zhu et al. proposed a new HT for
reducing the computation complexity of the traditional HT. The authors called their new method as
Probabilistic Convergent Hough Transform (PCHT). To enable the HT to detect an arbitrary object,
the Generalized Hough Transform (GHT) is the modification of the HT using the idea of template
matching [15]. The problem of finding the object is solved by finding the model’s position, and the
transformation’s parameter in GHT.
The theory of neutrosophy (NS) was introduced by Smarandache as a new branch of
philosophy [16,17]. Different from fuzzy logic, in NS, every event has not only a certain degree of truth,
but also a falsity degree and an indeterminacy degree that have to be considered independently from
each other [18]. Thus, an event or entity {A} is considered with its opposite {Anti-A} and the neutrality
{Neut-A}. In this paper, we proposed a novel HT algorithm, namely NHT, to improve the line detection
ability of the HT in noisy environments. Specifically, the neutrosophic theory was adopted to improve
the noise consistency of the HT. To this end, the Hough space is initially transferred into the NS
domain by calculating the NS memberships. A filtering mechanism is adopted that is based on the
indeterminacy membership of the NS domain. This filter considers the neighborhood information
in order to remove the indeterminacy in the spatial neighborhood of Hough space. A peak search
algorithm on the Hough space is then employed to detect the potential lines in the image domain.
Extensive experiments on noisy and noise-free images are performed in order to show the efficiency of
the proposed NHT algorithm. We further compare our proposed NHT with traditional HT and FHT
methods on a variety of images. The obtained results show the efficiency of the proposed NHT on
both noisy and noise-free images.
The rest of the paper is structured as follows: Section 2 briefly reviews the Hough transform and
fuzzy Hough transform. Section 3 describes the proposed method based on neutrosophic domain


images and indeterminacy filtering. Section 4 describes the extensive experimental works and results,
and conclusions are drawn in Section 5.

2. Previous Works
In the following sub-sections, we briefly re-visit the theories of HT and FHT. The interested
readers may refer to the related references for more details about the methodologies.

2.1. Hough Transform


As mentioned in the introduction, HT is a popular feature extractor for image-based
pattern recognition applications. In other words, HT constructs a specific parameter space, called
Hough space, and then uses it to detect arbitrary shapes in a given image. The construction of the
Hough space is handled by a voting procedure. Let us consider a line given in the image domain as:

y = kx + b (1)

where k is the slope and b is the intercept. Figure 1 depicts such a straight line with ρ and θ in the image
domain. In Equation (1), if we replace (k, b) with (ρ, θ), then we obtain the following equation:

y = −(cosθ / sinθ) x + ρ / sinθ (2)


Figure 1. A straight line in an image domain.

If we re-write the Equation (2), as shown in Equation (3), the image domain data can be represented
in the Hough space parameters.
ρ = xcosθ + ysinθ (3)

Thus, a point in the Hough space can specify a straight line in the image domain. After Hough
transform, an image I in the image domain is transferred into Hough space, which is denoted as IHT .
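To make the voting procedure behind Equation (3) concrete, a minimal Python sketch is given below (our illustration, assuming numpy; the accumulator layout and discretisation choices are ours and are not prescribed by the paper).

import numpy as np

def hough_transform(edge_img, n_theta=180):
    """Minimal sketch of the standard HT accumulator built from Equation (3).
    edge_img is a binary (H, W) array whose nonzero pixels are edge points."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # theta sampled at 1 degree steps
    rhos = np.arange(-diag, diag + 1)            # rho axis of the accumulator
    acc = np.zeros((len(rhos), n_theta), dtype=np.int64)

    ys, xs = np.nonzero(edge_img)                # coordinates of edge pixels
    for x, y in zip(xs, ys):
        # each edge point votes along its sinusoid rho = x*cos(theta) + y*sin(theta)
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas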

2.2. Fuzzy Hough Transform


Introducing the fuzzy concept into HT arose due to lines that are not straight in the image
domain [19]. In other words, the traditional HT only considers pixels that align on a straight line, and the
detection of non-straight lines with HT becomes challenging. Non-straight lines generally occur due to
noise or discrepancies in pre-processing operations, such as thresholding or edge detection. FHT aims
to alleviate this problem by applying a strategy in which the closer a pixel is to a line, the more it
contributes to the corresponding accumulator array bin. As mentioned in [7], the application of the
FHT can be achieved by two successive steps. In the first step, the traditional HT is applied to obtain


the Hough parameter space. In the second step, a 1-D Gaussian function is used to convolve the rows
of the obtained Hough parameter space. The 1-D Gaussian function is given as follows:

f(t) = e^(−t²/σ²) if t < R,
f(t) = 0 otherwise, (4)

where R = σ defines the width of the 1-D Gaussian function.
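A short sketch of this smoothing step follows (our reading of Section 2.2, assuming numpy; which accumulator axis plays the role of the "rows" depends on the layout chosen above).

import numpy as np

def fuzzy_hough(acc, sigma=2.0, axis=1):
    """Sketch of FHT: 1-D convolution of the Hough accumulator with the
    truncated Gaussian of Equation (4), applied along the chosen axis."""
    t = np.arange(-int(sigma) + 1, int(sigma))    # integer offsets with |t| < R, R = sigma
    kernel = np.exp(-(t ** 2) / sigma ** 2)       # Equation (4)
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis, acc)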

3. Proposed Method

3.1. Neutrosophic Hough Space Image


An element in NS is defined as follows: let A = {A1, A2, ..., Am} be a set of alternatives in a neutrosophic
set. The alternative Ai is {T(Ai), I(Ai), F(Ai)}/Ai, where T(Ai), I(Ai), and F(Ai) are the membership
values to the true, indeterminate, and false sets.
A Hough space image IHT is mapped into neutrosophic set domain, denoted as INHT ,
which is interpreted using THT , IHT , and FHT . Given a pixel P(ρ, θ ) in IHT , it is interpreted
as PNHT (ρ, θ ) = { THT (ρ, θ ), IHT (ρ, θ ), FHT (ρ, θ ) }. THT (ρ, θ ), IHT (ρ, θ ), and FHT (ρ, θ ) represent the
memberships belonging to foreground, indeterminate set, and background, respectively [20–27].
Based on the Hough transformed value and local neighborhood information, the true membership
and indeterminacy membership are used to describe the indeterminacy among local neighborhood as:

THT(ρ, θ) = (g(ρ, θ) − gmin) / (gmax − gmin) (5)

IHT(ρ, θ) = (Gd(ρ, θ) − Gdmin) / (Gdmax − Gdmin) (6)

FHT(ρ, θ) = 1 − THT(ρ, θ) (7)

where g(ρ, θ ) and Gd(ρ, θ ) are the HT values and its gradient magnitude at the pixel of PNHT (ρ, θ ) on
the image INHT .
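A compact sketch of Equations (5)-(7) is given below (our illustration, assuming numpy; the paper does not specify the gradient operator, so numpy's finite-difference gradient is used as a stand-in).

import numpy as np

def to_neutrosophic(acc):
    """Map the Hough accumulator into the T, I, F membership sets of
    Equations (5)-(7). A tiny epsilon guards against division by zero."""
    g = acc.astype(float)
    t = (g - g.min()) / (g.max() - g.min() + 1e-12)        # Equation (5)
    gy, gx = np.gradient(g)
    gd = np.hypot(gx, gy)                                   # gradient magnitude Gd
    i = (gd - gd.min()) / (gd.max() - gd.min() + 1e-12)     # Equation (6)
    f = 1.0 - t                                             # Equation (7)
    return t, i, f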

3.2. Indeterminacy Filtering


A filter is defined based on the indeterminacy and is used to remove the effect of indeterminacy
information for further segmentation; its kernel function is defined as follows:
 
GI(ρ, θ) = (1 / (2πσI²)) exp(−(ρ² + θ²) / (2σI²)) (8)

where σI is the standard deviation value, which is defined as a function f(·) of the
indeterminacy degree. When the indeterminacy level is high, σI is large and the filtering makes the
current local neighborhood smoother. When the indeterminacy level is low, σI is small and the
filtering applies less smoothing to the local neighborhood.
An indeterminacy filtering is applied to THT(ρ, θ) to make it more homogeneous.

T′HT(ρ, θ) = THT(ρ, θ) ⊕ GI(u, v) = ∑_{v = θ−m/2}^{θ+m/2} ∑_{u = ρ−m/2}^{ρ+m/2} THT(ρ − u, θ − v) GI(u, v) (9)

GI(u, v) = (1 / (2πσI²)) exp(−(u² + v²) / (2σI²)) (10)

σI(ρ, θ) = f(IHT(ρ, θ)) = a IHT(ρ, θ) + b (11)


where T′HT is the indeterminacy filtering result, and a and b are the parameters of the linear function that
transforms the indeterminacy level into the standard deviation value.
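The following Python sketch implements Equations (9)-(11) directly (our illustration, assuming numpy; a plain nested loop is used for clarity rather than speed, and the kernel is normalised to sum to one).

import numpy as np

def indeterminacy_filter(t, i, a=0.5, b=0.5, m=7):
    """Adaptive smoothing of T: at each pixel the Gaussian standard deviation
    is sigma_I = a*I + b (Equation (11)), the kernel follows Equation (10),
    and the weighted sum realises the convolution of Equation (9)."""
    pad = m // 2
    tp = np.pad(t, pad, mode="edge")
    out = np.empty_like(t)
    offs = np.arange(-pad, pad + 1)
    uu, vv = np.meshgrid(offs, offs)
    for r in range(t.shape[0]):
        for c in range(t.shape[1]):
            sigma = a * i[r, c] + b
            k = np.exp(-(uu ** 2 + vv ** 2) / (2.0 * sigma ** 2))
            k /= k.sum()
            out[r, c] = np.sum(k * tp[r:r + m, c:c + m])
    return out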

3.3. Thresholding Based on Histogram in Neutrosophic Hough Image


After indeterminacy filtering, the new true membership set T′HT in the Hough space image becomes
homogeneous, and the clusters with high HT values become compact, which makes them easier to identify.
A thresholding method is used on T′HT to pick up the clusters with high HT values, which correspond to
the lines in the original image domain. The thresholding value is automatically determined by the maximum
peak of the histogram of the new Hough space image.

TBW(ρ, θ) = 1 if T′HT(ρ, θ) > ThHT, and 0 otherwise, (12)

where ThHT is the threshold value that is obtained from the histogram of T′HT.
In the binary image after thresholding, the object regions are detected and the coordinates of
their centers are identified as the parameters (ρ, θ) of the detected lines. Finally, the detected lines are
found and recovered in the original images using the values of ρ and θ.
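A sketch of this peak detection step is shown below (our interpretation, assuming numpy and scipy.ndimage; taking the threshold at the largest histogram peak is one plausible reading of the description around Equation (12)).

import numpy as np
from scipy import ndimage

def detect_lines(t_filtered, rhos, thetas, bins=64):
    """Threshold the filtered membership (Equation (12)), label the resulting
    object regions, and return the (rho, theta) of each region centre."""
    hist, edges = np.histogram(t_filtered, bins=bins)
    th = edges[np.argmax(hist) + 1]            # threshold from the maximum peak
    bw = t_filtered > th
    labels, n = ndimage.label(bw)
    centres = ndimage.center_of_mass(bw, labels, range(1, n + 1))
    return [(rhos[int(round(r))], thetas[int(round(c))]) for r, c in centres]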

4. Experimental Results
In order to demonstrate the efficiency of our proposed NHT, we conducted various experiments on a
variety of noisy and noise-free images. Before illustrating the obtained results, we opted to show the
effect of our proposed NHT on a synthetic image. All of the necessary parameters for NHT were fixed
in the experiments, where the sigma parameter and the window size of the indeterminacy filter were
chosen as 0.5 and 7, respectively. The values of a and b in Equation (11) were set to 0.5 by the trial and
error method.
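Putting the sketches from the previous sections together with these parameter choices, an end-to-end run could look as follows (illustrative only; hough_transform, to_neutrosophic, indeterminacy_filter and detect_lines refer to the sketches given earlier and are our own names).

def nht_pipeline(edge_img):
    """End-to-end sketch: Hough voting, NS mapping, indeterminacy filtering,
    and peak detection, with window size 7 and a = b = 0.5 as in Section 4."""
    acc, rhos, thetas = hough_transform(edge_img)
    t, i, f = to_neutrosophic(acc)
    t_filtered = indeterminacy_filter(t, i, a=0.5, b=0.5, m=7)
    return detect_lines(t_filtered, rhos, thetas)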
Figure 2 illustrates a synthetic image, its NS Hough space, the detected peaks in the NS Hough space,
and the detected lines.
As seen in Figure 2a, we constructed a synthetic image in which four dotted lines cross in
the center of the image. The lines are designed not to be perfectly straight in order to test the proposed
NHT. Figure 2b shows the NS Hough space of the input image. As seen in Figure 2b, each image point
in the image domain corresponds to a sinusoidal curve in the NS Hough space, and the intersections of these
curves indicate the straight lines in the image domain. The intersecting curve regions, as shown
in Figure 2b, were detected based on the histogram thresholding method, and the exact location of the
peaks was determined by finding the center of each region. The obtained peak locations are shown
in Figure 2c. The obtained peak locations were then converted to lines and the detected lines were
superimposed in the image domain, as shown in Figure 2d.
We performed the above experiment one more time when the input synthetic image was corrupted
with noise. With this experiment, we can investigate the behavior of our proposed NHT on noisy
images. To this end, the input synthetic image was degraded with a 10% salt & pepper noise.
The degraded input image can be seen in Figure 3a. The corresponding NS Hough space is depicted
in Figure 3b. As seen in Figure 3b, the NS Hough space becomes denser when compared with
Figure 2b, because all of the noise points contributed to the NS Hough space. The thresholded
NS Hough space is illustrated in Figure 3c, where four peaks are still visible. Finally, as seen
in Figure 3d, the proposed method can detect the four lines correctly in a noisy image. We further
experimented on some noisy and noise-free images, and the obtained results are shown in Figure 4.
While the images in the Figure 4a column show the input noisy and noise-free images, the Figure 4b
column gives the obtained results. As seen in the results, the proposed NHT is quite effective for
both noisy and noise-free images. All of the lines were detected successfully. In addition, as shown in
the first row of Figure 4, the proposed NHT could detect lines that were not perfectly straight.



Figure 2. Application of the neutrosophic Hough transform (NHT) on a synthetic image, (a) input
synthetic image; (b) neutrosophy (NS) Hough space; (c) Detected peaks in NS Hough space; and, (d)
Detected lines.


Figure 3. Application of the NHT on a noisy synthetic image, (a) input noisy synthetic image; (b) NS
Hough space; (c) Detected peaks in NS Hough space; and, (d) Detected lines.



Figure 4. Application of the NHT on a noisy synthetic image, (a) input noisy and noise-free synthetic
image; (b) Detected lines with proposed NHT.

As the proposed NHT was quite good at detecting the lines in both noisy and noise-free
synthetic images, we performed some experiments on gray-scale real-world images. The obtained
results are shown in Figure 5. Six images were used and the obtained lines were superimposed on the
input images. As seen from the results, the proposed NHT yielded successful results. For example,
for the images of Figure 5a,c–f, the NHT detected reasonable lines. For the chessboard image
(Figure 5b), only a few lines were missed.


We further compared the proposed NHT with FHT and the traditional HT on a variety of noisy
images, and the obtained results are given in Figure 6. The first column of Figure 6 shows the HT
performance, the second column shows the FHT achievements, and the final column shows the obtained
results with the proposed NHT.


Figure 5. Various experimental results for gray scale real-world images (a–f).

With visual inspection, the proposed method obtained better results than HT and FHT. Generally,
HT missed one or two lines in the noisy environment. Especially for the noisy images that were given
in the third row of Figure 6, HT just detected one line. FHT generally produced better results than
HT. FHT rarely missed lines but frequently detected the lines in the noisy images due to its fuzzy
nature. In addition, the detected lines with FHT were not on the ground-truth lines, as seen in the first and
second rows of Figure 6. NHT detected all of the lines in all noisy images given in Figure 6.
The detected lines with NHT were almost superimposed on the ground-truth lines on visual
inspection. In order to evaluate the comparison results quantitatively, we computed the F-measure values
for each method, which is defined as:

F-measure = 2 × (precision × recall) / (precision + recall) (13)

where precision is the number of correct results divided by the number of all returned results. The recall
is the number of correct results divided by the number of results that should have been returned.
To this end, the detected lines and the ground-truth lines were considered. A good line detector
produces an F-measure percentage, which is close to 100%, while a poor one produces an F-measure
value that is closer to 0%. As Figure 6 shows the comparison results, the corresponding F-measure
values were tabulated in Table 1. As seen in Table 1, the highest average F-measure value was obtained
by the proposed NHT method. The second highest average F-measure value was obtained by FHT.
The worst F-measure values were produced by HT method.
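For completeness, Equation (13) computed from line counts is shown below (a trivial sketch of our own; the names are illustrative).

def f_measure_percent(n_correct, n_returned, n_expected):
    """Equation (13): precision = correct/returned, recall = correct/expected,
    reported as a percentage as in Table 1."""
    precision = n_correct / n_returned
    recall = n_correct / n_expected
    return 100.0 * 2.0 * precision * recall / (precision + recall)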


Table 1. F-measure percentages for compared methods.

Input Image HT FHT NHT


First row (Figure 6) 81.09 85.96 89.50
Second row (Figure 6) 71.87 86.75 98.28
Third row (Figure 6) 32.85 42.58 59.96
Average 61.94 71.76 82.58


Figure 6. Comparison of NHT with Hough transform (HT) and FHT on noisy images (a) HT results;
(b) FHT results and (c) NHT results.

We also experimented on various noisy real-world images and compared the obtained results with
HT and FHT achievements. The real-world images were degraded with a 10% salt & pepper noise.
The obtained results were given in Figure 7. The first, second, and third columns of Figure 7 show the HT,
FHT, and NHT achievement, respectively. In addition, in the last column of Figure 7, the running
times of each method for each image were given for comparison purposes.
As can be seen in Figure 7, the proposed NHT method is quite robust against noise and is able to
find most of the true lines in the given images. In addition, on visual inspection, the proposed
NHT achieved better results than the compared HT and FHT. For example, for the image
depicted in the first row of Figure 7, the NHT detected all of the lines. FHT detected almost all of the
lines, but with some error. However, HT detected just one line and missed the others. Similar results were
also obtained for the other images depicted in the second and third rows of Figure 7.


In addition, for a comparison of the running times, it is seen that there are no significant differences
between the running times of the compared methods.

Running times reported alongside the noisy real-world images in Figure 7 (one set per image row):
Row 1: HT 0.265 s, FHT 0.320 s, NHT 0.329 s.
Row 2: HT 0.815 s, FHT 0.944 s, NHT 1.510 s.
Row 3: HT 1.080 s, FHT 1.177 s, NHT 0.214 s.

Figure 7. Comparison of NHT with HT and FHT on noisy images (a) HT results; (b) FHT results; and,
(c) NHT results.

5. Conclusions
In this paper, a novel Hough transform, namely NHT, was proposed, which uses the NS theory
in voting procedure of the HT algorithm. The proposed NHT is quite efficient in the detection of the
lines in both noisy and noise-free images. We compared the performance achievement of proposed
NHT with traditional HT and FHT methods on noisy images. NHT outperformed in line detection
on noisy images. In future works, we are planning to extend the NHT on more complex images,
such natural images, where there are so many textured regions. In addition, Neutrosophic circular HT
will be investigated in our future works.

Author Contributions: Ümit Budak, Yanhui Guo, Abdulkadir Şengür and Florentin Smarandache conceived and
worked together to achieve this work.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Hough, P.V.C. Method and Means for Recognizing Complex Patterns. US Patent 3,069,654, 25 March 1960.
2. Duda, R.O.; Hart, P.E. Use of the Hough transform to detect lines and curves in pictures. Commun. ACM
1972, 15, 11–15. [CrossRef]
3. Chung, K.L.; Chen, T.C.; Yan, W.M. New memory- and computation-efficient Hough transform for detecting
lines. Pattern Recognit. 2004, 37, 953–963. [CrossRef]
4. Guo, S.; Pridmore, T.; Kong, Y.; Zhang, X. An improved Hough transform voting scheme utilizing surround
suppression. Pattern Recognit. Lett. 2009, 30, 1241–1252. [CrossRef]
5. Walsh, D.; Raftery, A. Accurate and efficient curve detection in images: The importance sampling Hough
transform. Pattern Recognit. 2002, 35, 1421–1431. [CrossRef]


6. Xu, L.; Oja, E. Randomized Hough transform (RHT): Basic mechanisms, algorithms, and computational
complexities. CVGIP Image Underst. 1993, 57, 131–154. [CrossRef]
7. Han, J.H.; Koczy, L.T.; Poston, T. Fuzzy Hough transform. Patterns Recognit. Lett. 1994, 15, 649–658.
[CrossRef]
8. Montseny, E.; Sobrevilla, P.; Marès Martí, P. Edge orientation-based fuzzy Hough transform (EOFHT).
In Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology, Zittau,
Germany, 10–12 September 2003.
9. Mathavan, S.; Vaheesan, K.; Kumar, A.; Chandrakumar, C.; Kamal, K.; Rahman, M.; Stonecliffe-Jones, M.
Detection of pavement cracks using tiled fuzzy Hough transform. J. Electron. Imaging 2017, 26, 053008.
[CrossRef]
10. Chatzis, V.; Ioannis, P. Fuzzy cell Hough transform for curve detection. Pattern Recognit. 1997, 30, 2031–2042.
[CrossRef]
11. Suetake, N.; Uchino, E.; Hirata, K. Generalized fuzzy Hough transform for detecting arbitrary shapes in a
vague and noisy image. Soft Comput. 2006, 10, 1161–1168. [CrossRef]
12. Chung, K.-L.; Lin, Z.-W.; Huang, S.-T.; Huang, Y.-H.; Liao, H.-Y.M. New orientation-based elimination
approach for accurate line-detection. Pattern Recognit. Lett. 2010, 31, 11–19. [CrossRef]
13. Cheng, H.-D.; Guo, Y.; Zhang, Y. A novel Hough transform based on eliminating particle swarm optimization
and its applications. Pattern Recognit. 2009, 42, 1959–1969. [CrossRef]
14. Zhu, L.; Chen, Z. Probabilistic Convergent Hough Transform. In Proceedings of the International Conference
on Information and Automation, Changsha, China, 20–23 June 2008.
15. Ballard, D.H. Generalizing the Hough Transform to Detect Arbitrary Shapes. Pattern Recognit. 1981, 13,
111–122. [CrossRef]
16. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; ProQuest Information & Learning;
American Research Press: Rehoboth, DE, USA, 1998; 105p.
17. Smarandache, F. Introduction to Neutrosophic Measure, Neutrosophic Integral and Neutrosophic Probability; Sitech
Education Publishing: Columbus, OH, USA, 2013; 55p.
18. Smarandache, F. A Unifying Field in Logics Neutrosophic Logic. In Neutrosophy, Neutrosophic Set, Neutrosophic
Probability, 3rd ed.; American Research Press: Rehoboth, DE, USA, 2003.
19. Vaheesan, K.; Chandrakumar, C.; Mathavan, S.; Kamal, K.; Rahman, M.; Al-Habaibeh, A. Tiled fuzzy Hough
transform for crack detection. In Proceedings of the Twelfth International Conference on Quality Control by
Artificial Vision (SPIE 9534), Le Creusot, France, 3–5 June 2015.
20. Guo, Y.; Şengür, A. A novel image segmentation algorithm based on neutrosophic filtering and level set.
Neutrosophic Sets Syst. 2013, 1, 46–49.
21. Guo, Y.; Xia, R.; Şengür, A.; Polat, K. A novel image segmentation approach based on neutrosophic c-means
clustering and indeterminacy filtering. Neural Comput. Appl. 2017, 28, 3009–3019. [CrossRef]
22. Karabatak, E.; Guo, Y.; Sengur, A. Modified neutrosophic approach to color image segmentation. J. Electron.
Imaging 2013, 22, 013005. [CrossRef]
23. Guo, Y.; Sengur, A. A novel color image segmentation approach based on neutrosophic set and modified
fuzzy c-means. Circuits Syst. Signal Process. 2013, 32, 1699–1723. [CrossRef]
24. Sengur, A.; Guo, Y. Color texture image segmentation based on neutrosophic set and wavelet transformation.
Comput. Vis. Image Underst. 2011, 115, 1134–1144. [CrossRef]
25. Guo, Y.; Şengür, A.; Tian, J.W. A novel breast ultrasound image segmentation algorithm based on
neutrosophic similarity score and level set. Comput. Methods Programs Biomed. 2016, 123, 43–53. [CrossRef]
[PubMed]
26. Guo, Y.; Sengur, A. NCM: Neutrosophic c-means clustering algorithm. Pattern Recognit. 2015, 48, 2710–2724.
[CrossRef]
27. Guo, Y.; Şengür, A. A novel image segmentation algorithm based on neutrosophic similarity clustering.
Appl. Soft Comput. 2014, 25, 391–398. [CrossRef]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

MDPI
St. Alban-Anlage 66
4052 Basel
Switzerland
Tel. +41 61 683 77 34
Fax +41 61 302 89 18
www.mdpi.com

Axioms Editorial Office


E-mail: [email protected]
www.mdpi.com/journal/axioms
ISBN 978-3-03897-289-1
