
International Journal on Cybernetics & Informatics (IJCI) Vol.13, No.2, April 2024

On the Construction of Perfect Keyword Secure PEKS Scheme

Indranil Ghosh Ray


Queen’s University Belfast, UK

Abstract. With the growing popularity of cloud computing, searchable encryption has become a centre of attraction for enhancing the privacy and usability of shared data. The first searchable encryption scheme in the public key setting was proposed by Boneh et al. and is known as PEKS. In the PEKS scheme, one can easily link a ciphertext and a trapdoor. In an Information Sciences 2017 paper, Huang et al. proposed a public key SE scheme in which encryption of a document or keyword requires the secret key of the data sender. The data sender generates ciphertexts and uploads them onto the cloud server. The data receiver generates trapdoors using the public key of the sender and its own secret key. Thus, the PEKS scheme of Huang et al. circumvents the above attack by linking the ciphertext and the trapdoor to the key of the sender. However, no work is available in the literature that prevents attacks which link the user key to the ciphertext, or the server key to the ciphertext. In this paper we address these issues. We formalize two new security notions, namely UKI-security and SKI-security, and show that our scheme is secure under these newly introduced notions.

1 Introduction
The advent of cloud computing has enabled users to outsource storage to cloud service
providers. These providers offer affordable and reliable storage of client data. In addition,
the cloud service provider can also perform bespoke processing on any data stored in its
server by a client. Any data stored by a client on a cloud server is exposed to the risk
of privacy breach. Thus, clients are increasingly inclined to encrypt every piece of information
before storing it on a cloud server.
Encryption of stored information impedes the ability of the cloud server to perform
processing of the data and drastically affects the usability of cloud services. Searchable
encryption schemes were proposed to alleviate this issue. This technique enables a cloud
server to search for a keyword in an encrypted document stored in its memory. For this
purpose, it requires the client to issue a trapdoor corresponding to the keyword to be
searched. In public key searchable encryption, the public encryption key of the data owner
is used to encrypt a document, whereas, the secret key is used to generate a trapdoor. The
first public key SE scheme was proposed by Boneh et al. in [4]. This scheme and almost all
other public key SE schemes allow anyone having the knowledge of the public encryption
key of the data owner to encrypt a document. So, given a trapdoor, the cloud server can
encrypt all keywords and try to match them one by one with the trapdoor using the search
algorithm. This way, the cloud server can identify the trapdoor in all circumstances.
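The brute-force keyword guessing described above can be summarised by the short sketch below (ours, for illustration only); `peks_encrypt` and `peks_test` are hypothetical stand-ins for the Enc and Test algorithms of such a scheme.

```python
# Illustrative sketch (ours) of the keyword guessing attack described above.
# `peks_encrypt` and `peks_test` are hypothetical stand-ins for the Enc and
# Test algorithms of a generic public key SE scheme such as [4].
def guess_keyword(trapdoor, public_key, dictionary, peks_encrypt, peks_test):
    for candidate in dictionary:                 # the server tries every likely keyword
        ciphertext = peks_encrypt(public_key, candidate)
        if peks_test(ciphertext, trapdoor):      # the search algorithm links them
            return candidate                     # the trapdoor is identified
    return None
```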
In [14], Huang et al. proposed a public key SE scheme. In this scheme, encryption
of a document or keyword requires the secret key of the data sender. The data sender
generates ciphertexts and uploads them onto the cloud server. The data receiver generates
trapdoors depending upon the public key of the sender and its own secret key. Thus, the
PEKS scheme of Huang et al. circumvents the above attack by linking the ciphertext and
the trapdoor to the key of the sender. The cloud server who does not know the secret key
of the sender cannot produce ciphertexts by itself. Thus, it cannot perform a brute force
search against a trapdoor.
Our contribution:
1. Unlike the PEKS scheme of [6], where full linkability is possible, and the PAEKS scheme of [14], where partial linkability is possible, we achieve full unlinkability in terms of user key indistinguishability (UKI) and server key indistinguishability (SKI).
2. We provide formal definitions of UKI security and SKI security.
3. We provide formal security proofs for UKI security and SKI security under game-based modelling.
4. We implement a prototype of our scheme using the Java jPair library and validate it against the TIMIT dataset [1]; the results are presented in the full version of the paper. Our empirical results indicate that our scheme is feasible for practical use.

Organization. The rest of the paper is organised as follows. In Section 2, we discuss related work on searchable encryption. In Section 3, we discuss the crucial definitions that are needed. In Section 4, we describe our scheme in detail. In Section 5, we discuss user key indistinguishability. In Section 6, we discuss server key indistinguishability. In Section 7, we study the overhead associated with our scheme and provide a comparative study. We conclude the paper in Section 8.

2 Related Work

Searchable encryption is a newly emerging technology and has drawn the attention of many
leading research groups, leading to a considerable amount of research work in this domain
[2,5,7–9,12,13,15–25]. The concept of search-aware encryption was first introduced by Song
et al. [19]. Searchable encryption schemes are susceptible to leakage of information related to
the access pattern and the search pattern, making them vulnerable to statistical attacks [12].
Curtmola et al. [11,12] proposed two schemes for keyword search in the symmetric setting.
In the asymmetric setting, the entity who generates the ciphertext can delegate the search
to any third party. This was introduced by Boneh et al. in [5].
In [15] Kamara et al. proposed a dynamic symmetric searchable encryption scheme
which was improved by Stefanov et al. in [20]. In [8] Cash et al. proposed an encryption
scheme to effectively perform dynamic search on large databases. Wang et al. proposed a
fuzzy search encryption scheme that supports multi-keyword search, where the authors
exploit a locality-sensitive hashing technique that can tolerate typos [24].
In [25] Zittrower et al. proposed the first searchable encryption for phrase search.
Their construction reveals too much information to the server. In [17] it is shown that the
server can learn the frequency of distinct keywords in different documents, and can know
the ciphertext of likely keywords. Tang et al. proposed another construction in [23], which
requires an extra lookup table, and also requires the client to store a dictionary of keywords
for making search possible. Kissel et al. [16] proposed a verifiable phrase search
scheme. Naveed et al. [18] showed that the leakage of the number of searched
documents can be reduced by storing documents in blocks. Chase et al. [10] proposed
an encryption scheme which is capable of string search. In [13], Ray et al. presented a
symmetric key searchable encryption scheme that supports phrase search. In this scheme,
the position of the phrase within the sentence is leaked after the search. In [2], Bag et al.
improved on this by proposing an access-pattern-secure encryption scheme in which an
honest-but-curious cloud server learns nothing about the access pattern, even after the search.
An efficient symmetric searchable encryption scheme supporting boolean search queries
was proposed by Cash et al. in [9]. Li et al. [17] proposed a symmetric searchable encryption
scheme that supports encrypted phrase searches known as LPSSE whose security is based
on the non-adaptive security definition of Curtmola et al [11]. Cao et al. [7] proposed
a privacy-preserving multi-keyword ranked search scheme using symmetric encryption.
Sun et al. [22] proposed an efficient privacy-preserving multi-keyword supporting cosine
similarity measurement. Sun et al. [21] constructed an encryption scheme that supports
multi-user boolean queries.
The scheme of Boneh et al. [4] is susceptible to an attack leveraging the link between
the ciphertext and the trapdoor. An attacker can inject ciphertexts of his choice and,
using such a link, break trapdoor security with overwhelming probability. The PEKS
scheme of Huang et al. [14] circumvents the above attack by linking the ciphertext and
the trapdoor to the key of the sender only. However, existing schemes are still susceptible
to attacks that link the user key to the ciphertext and the server key to the ciphertext.
In this paper we address these issues.

3 Definition and Preliminaries

Definition 1. (Bilinear Pairing) [6] Let ê : G1 × G1 → GT be a bilinear pairing mapping pairs of elements of G1 to GT, where G1 and GT are cyclic groups of the same prime order p. It has the following properties.

– Bilinearity. For any g, h ∈ G1 and a, b ∈ Z, ê(g^a, h^b) = ê(g, h)^{ab}.
– Non-degeneracy. For any generator g ∈ G1, ê(g, g) ∈ GT is a generator of GT.
– Computability. For any g, h ∈ G1, there is an efficient algorithm to compute ê(g, h).
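As an illustration, bilinearity implies that a pairing against a product of group elements splits into a product of pairings; this is the identity used repeatedly in the correctness argument of Section 4.2:

\[
\hat e\!\left(g^{a},\, h_1 h_2\right) \;=\; \hat e\!\left(g, h_1\right)^{a}\,\hat e\!\left(g, h_2\right)^{a}
\qquad \text{for all } g, h_1, h_2 \in G_1,\ a \in \mathbb{Z}.
\]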

Definition 2. (BDH) Given G, G1, e : G × G → G1, let us define a security experiment Exp^{BDH}_A(λ) as follows.

Exp^{BDH}_A(λ):
  g ←$ G
  a ←$ Zp, b ←$ Zp, c ←$ Zp
  Ω0 ← e(g, g)^{abc}
  Ω1 ←$ G1
  d ←$ {0, 1}
  d′ ← A(g, g^a, g^b, g^c, Ωd)
  return d = d′

For any probabilistic polynomial time adversary A, Adv^{BDH}_A(λ) is negligible, where
  Adv^{BDH}_A(λ) = Pr[Exp^{BDH}_A(λ) = 1] − 1/2.

The decision linear (DLIN) problem is defined as follows.

Definition 3. [3] (DLIN) Given the group G, let us define the following security experiment Exp^{DLIN}_A(λ).

Exp^{DLIN}_A(λ):
  g ←$ G
  a ←$ Zp, b ←$ Zp
  e ←$ Zp, f ←$ Zp
  Ω0 ← g^{e+f}, Ω1 ←$ G
  d ←$ {0, 1}
  d′ ← A(g, g^a, g^b, g^{ae}, g^{f/b}, Ωd)
  return d = d′

The advantage of an adversary A against the security experiment is given by Adv^{DLIN}_A(λ). We define
  Adv^{DLIN}_A(λ) = Pr[Exp^{DLIN}_A(λ) = 1] − 1/2.
For any PPT adversary A, Adv^{DLIN}_A(λ) ≤ negl(λ).

4 The Scheme

This scheme is an improvement over the searchable encryption scheme described in [14].

4.1 Construction
In this section, we discuss the construction of the PAEKS-II scheme. The PAEKS-II scheme comprises seven algorithms, namely Setup(λ), KeyGenR(param), KeyGenC(param), KeyGenS(param, PkC), Enc(param, w, SkS, PkR), Trapdoor(param, w, SkR, PkR, PkC, PkS, PkT) and Test(param, C, Tw, PkS, PkR, SkC). These algorithms are described below; an illustrative toy sketch follows the descriptions.

1. Setup(λ): This algorithm takes as input the security parameter λ, and outputs two groups G and G1 of prime order p, together with a bilinear map e : G × G → G1. It also returns a random generator g of G, and a hash function H : {0, 1}* → G. Let us denote param = (G, G1, p, g, e, H).
2. KeyGenR(param): This algorithm takes as input the public parameter param. It selects a random a ∈R Zp, and sets (SkR, PkR) = (a, g^a). This algorithm returns (SkR, PkR).
3. KeyGenC(param): This algorithm takes as input the public parameter param. It selects a random c ∈R Zp, and sets (SkC, PkC) = (c, g^c). This algorithm generates a NIZK proof ΠC of knowledge of SkC = log_g PkC. This algorithm returns (SkC, PkC, ΠC).
4. KeyGenS(param, PkC): This algorithm takes as input the public parameter param and the public key of the cloud PkC. It selects a random b ∈R Zp, and sets (SkS, PkS, PkT) = (b, g^b, PkC^b). This algorithm returns (SkS, PkS, PkT).
5. Enc(param, w, SkS, PkR): This algorithm takes as input the public parameter param, a keyword w, the secret key SkS of the data sender, and the public key PkR of the data receiver. The algorithm selects a random r ∈R Zp, and computes C1 = H(w)^{SkS} · g^r and C2 = PkR^r. This algorithm returns the ciphertext C = (C1, C2).
6. Trapdoor(param, w, SkR, PkR, PkC, PkS, PkT): This algorithm takes as input the public parameter param, a keyword w, the secret key SkR of the receiver, the public keys PkC and PkS of the cloud and the data sender, and the public trapdoor generation key PkT. It selects a random R ∈R G, and computes Tw = (e(PkS, H(w)^{SkR} · R), e(PkT, R)). The algorithm returns the trapdoor Tw.
7. Test(param, C, Tw, PkS, PkR, SkC): This algorithm takes as input the public parameter param, the ciphertext C = (C1, C2), the trapdoor Tw = (A, B), the public keys PkS and PkR of the sender and the receiver, and the secret key SkC of the cloud. This algorithm returns 1 if (A/B^{1/SkC}) · e(C2, g) = e(C1, PkR), and returns 0 otherwise.
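For concreteness, the following is a minimal, illustrative sketch of the seven algorithms. It is not the implementation reported in the full version (which uses the Java jPair library): to keep the sketch self-contained and runnable, elements of G and G1 are represented by their discrete logarithms modulo a prime p, which makes the pairing trivial to emulate but offers no security whatsoever (for instance, the public keys reveal the secret keys). The prime, the function names and the omission of the NIZK proof are our own illustrative choices.

```python
# Toy sketch of the PAEKS-II algorithms (illustration only, NOT secure).
# A group element g^x is stored as the exponent x, so multiplication in G/G1
# is addition of exponents and e(g^x, g^y) = e(g,g)^{xy} is stored as x*y mod p.
import hashlib
import secrets

p = 2**255 - 19                      # a large prime standing in for the group order

def H(w: str) -> int:                # hash a keyword into G
    return int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % p

def pair(x: int, y: int) -> int:     # toy bilinear map e : G x G -> G1
    return (x * y) % p

def keygen_R():                      # (Sk_R, Pk_R) = (a, g^a)
    a = secrets.randbelow(p - 1) + 1
    return a, a

def keygen_C():                      # (Sk_C, Pk_C) = (c, g^c); NIZK proof omitted
    c = secrets.randbelow(p - 1) + 1
    return c, c

def keygen_S(pk_C):                  # (Sk_S, Pk_S, Pk_T) = (b, g^b, Pk_C^b)
    b = secrets.randbelow(p - 1) + 1
    return b, b, (pk_C * b) % p

def enc(w, sk_S, pk_R):              # C1 = H(w)^{Sk_S} * g^r,  C2 = Pk_R^r
    r = secrets.randbelow(p - 1) + 1
    return (H(w) * sk_S + r) % p, (pk_R * r) % p

def trapdoor(w, sk_R, pk_S, pk_T):   # T_w = (e(Pk_S, H(w)^{Sk_R} * R), e(Pk_T, R))
    R = secrets.randbelow(p - 1) + 1
    return pair(pk_S, (H(w) * sk_R + R) % p), pair(pk_T, R)

def test(C, T_w, pk_R, sk_C):        # 1 iff (A / B^{1/Sk_C}) * e(C2, g) == e(C1, Pk_R)
    C1, C2 = C
    A, B = T_w
    # in G1, multiplication/division of elements become addition/subtraction of exponents
    lhs = (A - B * pow(sk_C, -1, p) + pair(C2, 1)) % p
    return int(lhs == pair(C1, pk_R))
```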

4.2 Correctness
We show that given a ciphertext and a correctly generated trapdoor, the Test(·) algorithm returns 1 if the ciphertext and the trapdoor correspond to the same keyword, and 0 otherwise. The ciphertext is C = (C1, C2), where C1 = H(w)^{SkS} · g^r and C2 = PkR^r. Again, the trapdoor is Tw = (e(PkS, H(w)^{SkR} R), e(PkT, R)).
Now,
  (e(PkS, H(w)^{SkR} R) / e(PkT, R)^{1/SkC}) · e(C2, g)
  = (e(PkS, H(w)^{SkR} R) / e(PkS, R)) · e(PkR^r, g)
  = e(PkS, H(w)^{SkR}) · e(PkR, g^r)
  = e(PkR, H(w)^{SkS}) · e(PkR, g^r)
  = e(PkR, H(w)^{SkS} g^r)
  = e(PkR, C1).
Now, let us assume that the ciphertext corresponds to the keyword w and the trapdoor corresponds to a different keyword w′. That is, the ciphertext is C = (C1, C2) = (H(w)^{SkS} g^r, PkR^r), and the trapdoor is Tw′ = (e(PkS, H(w′)^{SkR} R), e(PkT, R)). As such,
  (e(PkS, H(w′)^{SkR} R) / e(PkT, R)^{1/SkC}) · e(C2, g)
  = e(PkS, H(w′)^{SkR}) · e(PkR, g^r)
  = e(PkR, H(w′)^{SkS}) · e(PkR, g^r)
  = e(PkR, H(w′)^{SkS} g^r)
  ≠ e(PkR, C1).
Thus, our scheme is correct.
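Continuing the toy sketch given after Section 4.1, the correctness argument can be exercised directly; the keywords are arbitrary, and the first assertion mirrors the consistency check e(PkT, g) = e(PkC, PkS) performed in the security experiments below.

```python
# Quick check of the correctness argument, in the toy model sketched earlier.
sk_R, pk_R = keygen_R()
sk_C, pk_C = keygen_C()
sk_S, pk_S, pk_T = keygen_S(pk_C)

assert pair(pk_T, 1) == pair(pk_C, pk_S)     # e(Pk_T, g) = e(Pk_C, Pk_S)

C   = enc("cloud", sk_S, pk_R)
T_w = trapdoor("cloud", sk_R, pk_S, pk_T)    # same keyword as the ciphertext
T_x = trapdoor("server", sk_R, pk_S, pk_T)   # a different keyword

assert test(C, T_w, pk_R, sk_C) == 1         # matching keyword: Test returns 1
assert test(C, T_x, pk_R, sk_C) == 0         # non-matching keyword: Test returns 0
```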

4.3 Keyword Privacy


The first security notion which we discuss is called ‘keyword privacy’. This security notion
enables us to understand whether a PEKS scheme can resist leakage of information about
the keyword through ciphertexts and trapdoors. In other words, a PEKS scheme will be
secure according to this notion if the adversary cannot extract any useful keyword-related
information from a pair of trapdoor and ciphertext corresponding to the same keyword.
We formally define this notion below.
Let us consider the following security experiment Exp^{KWP}_A(λ). We denote by Adv^{KWP}_A(λ) the advantage of an adversary A = (A0, A1, A2) against the experiment Exp^{KWP}_A(λ). We define Adv^{KWP}_A(λ) = Pr[Exp^{KWP}_A(λ) = 1] − 1/2. A PEKS scheme is secure in accordance with the keyword privacy notion if for any PPT adversary A, Adv^{KWP}_A(λ) ≤ negl(λ).

Exp^{KWP}_A(λ):
  (G, G1, p, g, e, H) ← Setup(λ)
  (SkR, PkR) ← KeyGenR(G, G1, p, g, e, H)
  (st1, PkC, ΠC) ← A0^{KeyGenC}(G, G1, p, g, e, H, PkR)
  (SkS, PkS, PkT) ← KeyGenS(G, G1, p, g, e, H, PkC)
  (w0*, w1*, Q̃, st2) ← A1^{Enc(·),Trapdoor(·)}(st1, PkS, PkT)
  If Enc(·, wi*, ·) or Trapdoor(·, wi*, ·), i ∈ {0, 1}, was queried by A1
    return 0
  d ←$ {0, 1}
  Xi = Enc(·, wd*, ·), Yi = Trapdoor(·, wd*, ·) : i ∈ [1, Q̃]
  X = {Xi : i ∈ [1, Q̃]}, Y = {Yi : i ∈ [1, Q̃]}
  d′ ← A2^{Enc(·),Trapdoor(·)}(st2, X, Y)
  If Enc(·, wi*, ·) or Trapdoor(·, wi*, ·), i ∈ {0, 1}, is queried by A2
    return 0
  else return d = d′


In the above security experiment, the challenger first uses the Setup function to generate the public parameters. Then it generates the keys of the receiver, the server, and the sender. There is a three-stage adversary A = (A0, A1, A2). The challenger invokes A0 with the public parameters and the public key of the receiver, and A0 outputs the public key of the cloud. Then A1 is invoked and given access to the Encryption and Trapdoor oracles. A1 outputs two keywords, such that it has not made an encryption or trapdoor query for either of them, together with an integer Q̃ ∈ poly(λ). The challenger randomly picks one of the keywords, and computes Q̃ ciphertexts and Q̃ trapdoors for that particular keyword. Then the challenger invokes A2 with these ciphertexts and trapdoors. The adversary has to guess which keyword the ciphertexts and the trapdoors correspond to. If the adversary makes a correct guess, it wins the experiment.
Here we provide a variant of the DLIN assumption defined in Definition 3.

Assumption 1 Given the group G, let us define the following security experiment Exp^{DLIN1}_A(λ).

Exp^{DLIN1}_A(λ):
  g ←$ G
  a ←$ Zp, b ←$ Zp
  e ←$ Zp, f ←$ Zp
  Ω0 ← g^{f/b}, Ω1 ←$ G
  d ←$ {0, 1}
  d′ ← A(g, g^a, g^b, g^{ae}, g^{e+f}, Ωd)
  return d = d′

The advantage of an adversary A against the security experiment is given by Adv^{DLIN1}_A(λ). We define
  Adv^{DLIN1}_A(λ) = Pr[Exp^{DLIN1}_A(λ) = 1] − 1/2.
For any PPT adversary A, Adv^{DLIN1}_A(λ) ≤ negl(λ).

Lemma 1. Definition 3 implies Assumption 1.

Proof. We show that if there exists an adversary A against the security experiment Exp^{DLIN1}_A(λ), it can be used to construct another adversary B against the security experiment Exp^{DLIN}_B(λ). B works as follows. It receives as input g, g^a, g^b, g^{ae}, g^{f/b}, and a challenge Ωd ∈ {g^{e+f}, R}, where R is uniformly random in G. B invokes A with the inputs g, g^a, g^b, g^{ae}, Ωd, g^{f/b}; that is, A's challenge is ωd = g^{f/b}. Observe that if Ωd = g^{e+f}, then ωd = g^{f/b} is the real value; alternatively, if Ωd is random, then ωd also appears random to A. Hence, if A can identify ωd, B will be able to identify Ωd correctly. Hence, the lemma holds. ⊔

Assumption 2 Given G, G1, e : G × G → G1, for all probabilistic polynomial time adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G;
      a ←$ Zp, b ←$ Zp, e ←$ Zp, f ←$ Zp;
      Ω0 ← g^{f/b}, Ω1 ←$ G;
      d ←$ {0, 1};
      d′ ← A(g, g^a, g^b, g^{ae}, e(g,g)^{af}, g^{e+f}, Ωd)
      : d′ = d ]

Lemma 2. Assumption 1 implies Assumption 2.

Proof. We show that if there exists an adversary A against Assumption 2, it can be used in the construction of another adversary B against Assumption 1. B works as follows: it receives as input g, g^a, g^b, g^{ae}, g^{e+f}, and the challenge Ωd ∈R {g^{f/b}, R}, where R is uniformly random in G. B computes e(g,g)^{af} as e(g^a, g^{e+f}) / e(g, g^{ae}). Now, B invokes A(g, g^a, g^b, g^{ae}, e(g,g)^{af}, g^{e+f}, Ωd), and it returns what A returns. It is easy to see that the lemma holds. ⊔

Assumption 3 Given G, G1, e : G × G → G1, for all probabilistic polynomial time adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G;
      a ←$ Zp, b ←$ Zp, k ←$ Zp, e ←$ Zp;
      Ω0 ← g^k, Ω1 ←$ G;
      d ←$ {0, 1};
      d′ ← A(g, g^a, g^b, g^{ae}, e(g,g)^{abk}, g^{e+bk}, Ωd)
      : d′ = d ]

Lemma 3. Assumption 2 implies assumption 3.

Proof. One can reduce the former assumption to the latter one by setting k = f /b. Hence,
the lemma holds. ⊔
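Spelling the substitution out: writing f = bk (equivalently k = f/b, and k is uniform in Zp whenever f is), each component of the Assumption 2 instance becomes the corresponding component of the Assumption 3 instance:

\[
e(g,g)^{af} = e(g,g)^{abk}, \qquad g^{e+f} = g^{e+bk}, \qquad \Omega_0 = g^{f/b} = g^{k}.
\]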

Assumption 4 Given G, G1, e : G × G → G1, for all probabilistic polynomial time adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G;
      a ←$ Zp, b ←$ Zp, k0 ←$ Zp, k1 ←$ Zp, e ←$ Zp;
      d ←$ {0, 1};
      d′ ← A(g, g^a, g^b, g^{ae}, e(g,g)^{abk_d}, g^{e+bk_d}, g^{k0})
      : d′ = d ]

Lemma 4. Assumption 3 implies Assumption 4.

Proof. We show that if there exists an adversary A against Assumption 4, it can be used to construct another adversary B against Assumption 3. B receives as input g, g^a, g^b, g^{ae}, e(g,g)^{abk}, g^{e+bk}, and Ωd ∈ {g^k, R}, where R is uniformly random in G. B implicitly sets k0 = log_g Ωd, and invokes A(g, g^a, g^b, g^{ae}, e(g,g)^{abk}, g^{e+bk}, Ωd), i.e., with g^{k0} = Ωd. It is easy to see that if A can distinguish between the two possible cases, so can B. Hence, the lemma holds. ⊔

Assumption 5 Given G, G1, e : G × G → G1, for all probabilistic polynomial time adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G;
      a ←$ Zp, b ←$ Zp, k0 ←$ Zp, k1 ←$ Zp, e ←$ Zp;
      d ←$ {0, 1};
      d′ ← A(g, g^a, g^b, g^{k0}, g^{k1}, g^{ae}, e(g,g)^{abk_d}, g^{e+bk_d})
      : d′ = d ]

Lemma 5. Assumption 4 implies assumption 5.


Proof. Let us assume that

  X = Pr[ g ←$ G;
          a ←$ Zp, b ←$ Zp, k0 ←$ Zp, k1 ←$ Zp, e ←$ Zp;
          d ←$ {0, 1};
          d′ ← A(g, g^a, g^b, g^{k0}, g^{k1}, g^{ae}, e(g,g)^{abk_d}, g^{e+bk_d})
          : d′ = d ],

  A = Pr[ g ←$ G;
          a ←$ Zp, b ←$ Zp, k0 ←$ Zp, k1 ←$ Zp, k2 ←$ Zp, e ←$ Zp;
          d ←$ {0, 2};
          d′ ← A(g, g^a, g^b, g^{k0}, g^{k1}, g^{ae}, e(g,g)^{abk_d}, g^{e+bk_d})
          : d′ = d ],

and

  B = Pr[ g ←$ G;
          a ←$ Zp, b ←$ Zp, k0 ←$ Zp, k1 ←$ Zp, k2 ←$ Zp, e ←$ Zp;
          d ←$ {1, 2};
          d′ ← A(g, g^a, g^b, g^{k0}, g^{k1}, g^{ae}, e(g,g)^{abk_d}, g^{e+bk_d})
          : d′ = d ].

From the triangle inequality, we can say that A + B ≥ X. Again, from Assumption 3, we can say that A = B. Hence, the result holds. ⊔

Theorem 1. Our PEKS scheme is secure according to the Keyword Privacy notion.

Proof. We show that if there exists an adversary A = (A0, A1, A2) against our PEKS scheme, then using it one can construct another adversary B against Assumption 5. B works as follows.
B receives as inputs g, g^a, g^b, g^{k0}, g^{k1}, g^{ae}, and the challenge Ωd = (Ωd1, Ωd2) = (e(g,g)^{abk_d}, g^{e+bk_d}). Here, d ∈R {0, 1}. B implicitly sets PkS = g^b and PkR = g^a. First, B invokes A0 with all the required inputs, and receives PkC and ΠC. B obtains SkC from the simulation of A0 or by extracting the NIZK proof ΠC. B then invokes A1 with the necessary inputs. Let K = {g^{k0}, g^{k1}}. B answers all queries of A1 to the oracles Enc(·) and Trapdoor(·). Apart from these, B also needs to answer the queries to the hash oracle H(·). The hash queries can be made either directly by A1, or indirectly through an encryption or trapdoor query of A1. Whenever A1 makes a query to the Enc(·) oracle with input w, B selects random x ∈R Zp and sets H(w) = g^x. B adds ⟨w, x, g^x⟩ to the list H. B also selects random r ∈R Zp, and computes C1 = PkS^x · g^r and C2 = PkR^r. It then returns Enc(w) = (C1, C2). If A1 makes a trapdoor query for a keyword w, then B selects a random y ∈R Zp and computes H(w) = g^y. B adds ⟨w, y, g^y⟩ to the list H. B also selects random r̃ ∈R Zp, and computes T1 = e(PkS, PkR^y · g^{r̃}) and T2 = e(PkT, g^{r̃}). B returns T(w) = (T1, T2). Whenever A makes a direct query to the H(·) oracle for a keyword w, B selects a random element h from G. If K = ∅, B assigns H(w) = h. Else, it selects δ ∈R K and assigns

  H(w) = δ with probability 1/Q, and H(w) = h with probability 1 − 1/Q.

B updates K as K = K \ {H(w)}, and returns H(w).
Now, A1 outputs the two challenge keywords w̃0 and w̃1. As required, A1 must not have made an encryption or a trapdoor query for either of them. A1 also returns Q̃ ∈ poly(λ). However, A1 could have made a hash query for one or both of the challenge keywords. If neither of them was involved in a previous hash query, then B assigns H(w̃0) = g^{k0} and H(w̃1) = g^{k1}. Else, if there exists l ∈ {0, 1} such that H(w̃l) was not queried, B assigns H(w̃l) = g^{k_l}. Now, if {H(w̃0), H(w̃1)} ≠ {g^{k0}, g^{k1}}, then B aborts and returns a random bit. Else, B generates Q̃ ciphertexts C1, C2, . . . , CQ̃. Here, Ci = (C1i, C2i), where C1i = Ωd2 · g^{αi} and C2i = g^{ae} · (g^a)^{αi} for i ∈ [1, Q̃], where αi ∈R Zp. B also generates Q̃ trapdoors from e(g,g)^{abk_d}. It selects random βi ∈R Zp for i ∈ [1, Q̃], and computes T1i = e(g,g)^{abk_d} · e(PkS, g)^{βi} and T2i = e(PkC, PkS)^{βi}, for all i ∈ [1, Q̃]. Let Twi denote (T1i, T2i), for all i ∈ [1, Q̃]. Now, B invokes A2 with all the necessary inputs, including the Q̃ ciphertexts and trapdoors. B answers the queries of A2 to the encryption and trapdoor oracles as it did for similar queries from A1. B responds to all the hash queries of A2 by returning randomly sampled elements from G. If A2 can identify the value of d, so can B.
Let us now calculate the success probability of B. B returns a random bit when it needs to abort. Thus, if it aborts, its success probability is 1/2. If it does not abort, its success probability is the same as that of A. B aborts only when {H(w̃0), H(w̃1)} ≠ {g^{k0}, g^{k1}}. This happens when A1 makes a direct query H(w̃j) but does not receive g^{k0} or g^{k1}, for some j ∈ {0, 1}. Hence, we can write
  Pr[Exp^{ASS5}_B(λ) = 1] ≥ Pr[B aborts] · 1/2 + Pr[B does not abort] · Pr[Exp^{KWP}_A(λ) = 1]
                          = 1/2 + Pr[B does not abort] · Adv^{KWP}_A(λ).
Now, the probability Pr[B does not abort] is at least F = min_{i=0}^{Q−2} (1 − 1/Q)^i · (1/Q^2). As such, F ≥ (1/Q^2)(1 − 1/Q)^{Q−2} ≥ 1/(4(Q−1)^2). Thus, Adv^{KWP}_A(λ) ≤ 4(Q−1)^2 · Adv^{ASS5}_B(λ). Hence, the result holds. ⊔
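For completeness, the final inequality above, and the analogous bound (1 − 1/Q)^{Q−1} · (1/Q) ≥ 1/(4(Q−1)) used in the later proofs, both reduce to the same elementary fact:

\[
\frac{1}{Q^{2}}\Bigl(1-\frac{1}{Q}\Bigr)^{Q-2} \;\ge\; \frac{1}{4(Q-1)^{2}}
\;\Longleftrightarrow\;
4\,(Q-1)^{Q} \;\ge\; Q^{Q}
\;\Longleftrightarrow\;
\Bigl(1+\frac{1}{Q-1}\Bigr)^{Q} \;\le\; 4,
\]

which holds for every Q ≥ 2, since the sequence (1 + 1/n)^{n+1} is decreasing in n and equals 4 at n = 1 (i.e. Q = 2).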

4.4 Privacy of Trapdoor


We now discuss a different privacy property of our scheme which is called privacy of
trapdoor. We show that our scheme does not allow an adversary to learn any information
from the trapdoor. For this purpose, let us consider the following security experiment.
Exp^{SPP}_A(λ):
  (G, G1, p, g, e, H) ← Setup(λ)
  (SkR, PkR) ← KeyGenR(G, G1, p, g, e, H)
  (PkC, ΠC) ← KeyGenC(G, G1, p, g, e, H)
  (st1, PkS, PkT) ← A0^{KeyGenS}(G, G1, p, g, e, H, PkR, PkC, ΠC)
  if e(PkT, g) ≠ e(PkC, PkS)
    return 0
  (w0*, w1*, st2) ← A1^{Trapdoor(·)}(st1)
  d ←$ {0, 1}
  Tw = Trapdoor(·, wd*, ·)
  d′ ← A2^{Trapdoor(·)}(st2, Tw)
  return d = d′
In the above security experiment, the challenger generates the public and secret keys
of the receiver and the cloud server. The adversary A = (A0 , A1 , A2 ) emulates the sender.
First, A0 generates the public keys for the sender. Then A1 is invoked by the challenger.
It produces two challenge keywords: w0∗ , and w1∗ . A1 can query the trapdoor oracle for any
keyword. The challenger flips a coin, and depending upon the output, chooses either w0∗ ,
or w1∗ . Then, the challenger generates a trapdoor Tw for the chosen keyword. Now, the
challenger invokes A2 with the trapdoor Tw . A2 has to guess whether Tw corresponds to
w0*, or w1*. The advantage of the adversary A against the security experiment Exp^{SPP}_A(λ) is denoted as Adv^{SPP}_A(λ). We define it as
  Adv^{SPP}_A(λ) = Pr[Exp^{SPP}_A(λ) = 1] − 1/2.
We say that the PEKS scheme provides SPP security if, for all probabilistic polynomial time adversaries A = (A0, A1, A2), Adv^{SPP}_A(λ) ≤ negl(λ). Note that in this adversarial model we want the adversary to have the same knowledge and capability as the data sender. This is because, if the scheme offers SPP security when the data sender is acting as the adversary, the scheme will obviously be secure as per the SPP security notion against any other adversary except the cloud server.

Assumption 6 Given G, G1, e : G × G → G1, for all PPT adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G, ∆ ←$ G1;
      a ←$ Zp;
      Ω0 ← ∆^a, Ω1 ←$ G1;
      d ←$ {0, 1};
      d′ ← A(g, g^a, ∆, Ωd)
      : d = d′ ]

Lemma 6. The BDH assumption (Definition 2) implies Assumption 6.

Proof. We show that if there exists an adversary A against Assumption 6, it can be used in the construction of another adversary B against the BDH experiment. B receives as input g, g^a, g^b, g^c, and a challenge Ωd ∈R {e(g,g)^{abc}, R}, where R ∈R G1. B computes ∆ = e(g^a, g^b). Now, B invokes A(g, g^c, ∆, Ωd). A returns a bit d′, and B returns the same bit. It is easy to see that the lemma holds. ⊔

Assumption 7 Given G, G1, e : G × G → G1, for all PPT adversaries A, the following probability is negligibly higher than 1/2.

  Pr[ g ←$ G, ∆ ←$ G1;
      a ←$ Zp;
      Ω0 ← ∆, Ω1 ←$ G1;
      d ←$ {0, 1};
      d′ ← A(g, g^a, ∆^a, Ωd)
      : d = d′ ]

Lemma 7. Assumption 6 implies Assumption 7.

Proof. We show that if there exists an adversary A against Assumption 7, one can use it to construct another adversary B against Assumption 6. B receives as inputs the following items: g, g^a, ∆, and Ωd. It invokes A(g, g^a, Ωd, ∆). Note that if Ωd ≠ ∆^a, then ∆ appears to A as a random element of G1. Hence, if A can distinguish between the two cases, B can identify Ωd. Hence, the lemma holds. ⊔

Assumption 8 Given G, G1, e : G × G → G1, let us consider the following security experiment Exp^{TRD}_A(λ).

Exp^{TRD}_A(λ):
  g ←$ G
  a ←$ Zp
  g^k ← A0(G, G1, e, g, g^a)
  r ←$ Zp
  Ω0 ← e(g^k, g^r), Ω1 ←$ G1
  d ←$ {0, 1}
  d′ ← A1(e(g^k, g^r)^a, Ωd)
  return d = d′

The advantage of an adversary A = (A0, A1) against the security experiment Exp^{TRD}_A(λ) is given by
  Adv^{TRD}_A(λ) = Pr[Exp^{TRD}_A(λ) = 1] − 1/2.
For all PPT adversaries A = (A0, A1), Adv^{TRD}_A(λ) ≤ negl(λ).

Lemma 8. Assumption 7 implies Assumption 8.

Proof. Let us assume that there exists an adversary A = (A0, A1) against the security experiment Exp^{TRD}_A(λ). We show how another adversary B against Assumption 7 can be constructed using A. B receives as inputs g, g^a, ∆^a, and Ωd. B invokes A0 and receives g^k. B implicitly assigns ∆ = e(g^k, g^r). If A1 can identify whether Ωd = ∆ or Ωd is random, so can B. Hence, the lemma holds. ⊔

Assumption 9 Given G, G1, e : G × G → G1, let us consider the following security experiment Exp^{TRD1}_A(λ).

Exp^{TRD1}_A(λ):
  g ←$ G
  a ←$ Zp
  b0 ←$ Zp, b1 ←$ Zp
  c ←$ Zp
  g^k ← A0(G, G1, e, g, g^a, g^{b0}, g^{b1}, g^c)
  r ←$ Zp
  Ω0 ← e(g^k, g^{c·b0+r}), Ω1 ← e(g^k, g^{c·b1+r})
  d ←$ {0, 1}
  d′ ← A1(g, g^a, e(g^k, g^r)^a, Ωd)
  return d = d′

The advantage of an adversary A = (A0, A1) against the security experiment Exp^{TRD1}_A(λ) is given by
  Adv^{TRD1}_A(λ) = Pr[Exp^{TRD1}_A(λ) = 1] − 1/2.
For all PPT adversaries A = (A0, A1), Adv^{TRD1}_A(λ) ≤ negl(λ).

Lemma 9. Assumption 8 implies Assumption 9.
Proof. Let us consider another security experiment Exp^{TRD2}_A(λ) as follows.

Exp^{TRD2}_A(λ):
  g ←$ G
  a ←$ Zp
  b0 ←$ Zp, b1 ←$ Zp
  c ←$ Zp
  g^k ← A0(G, G1, e, g, g^a, g^{b0}, g^{b1}, g^c)
  r ←$ Zp
  R ←$ G1
  Ω0 ← e(g^k, g^{c·b0}) · R, Ω1 ← e(g^k, g^{c·b1}) · R
  d ←$ {0, 1}
  d′ ← A1(g, g^a, e(g^k, g^r)^a, Ωd)
  return d = d′

Let Adv^{TRD2}_A(λ) be the advantage of an adversary A = (A0, A1) against Exp^{TRD2}_A(λ). Let us define it as
  Adv^{TRD2}_A(λ) = Pr[Exp^{TRD2}_A(λ) = 1] − 1/2.
Now,
  Adv^{TRD1}_A(λ) + Adv^{TRD2}_A(λ)
  = Pr[Exp^{TRD1}_A(λ) = 1] + Pr[Exp^{TRD2}_A(λ) = 1] − 1
  = (Pr[d = 0]·Pr[Exp^{TRD1}_A(λ) = 1 | d = 0] + Pr[d = 1]·Pr[Exp^{TRD1}_A(λ) = 1 | d = 1])
    + (Pr[d = 0]·Pr[Exp^{TRD2}_A(λ) = 1 | d = 0] + Pr[d = 1]·Pr[Exp^{TRD2}_A(λ) = 1 | d = 1]) − 1
  = (1/2)·(Pr[Exp^{TRD1}_A(λ) = 1 | d = 0] + Pr[Exp^{TRD2}_A(λ) = 1 | d = 0])
    + (1/2)·(Pr[Exp^{TRD1}_A(λ) = 1 | d = 1] + Pr[Exp^{TRD2}_A(λ) = 1 | d = 1]) − 1.
It is easy to see that
  Pr[Exp^{TRD}_A(λ) = 1 | d = 0] = Pr[Exp^{TRD1}_A(λ) = 1 | d = 0] = Pr[Exp^{TRD1}_A(λ) = 1 | d = 1],
and
  Pr[Exp^{TRD}_A(λ) = 1 | d = 1] = Pr[Exp^{TRD2}_A(λ) = 1 | d = 0] = Pr[Exp^{TRD2}_A(λ) = 1 | d = 1].
Hence, Adv^{TRD1}_A(λ) + Adv^{TRD2}_A(λ) ≤ Pr[Exp^{TRD}_A(λ) = 1] + Pr[Exp^{TRD}_A(λ) = 1] − 1 = 2·Adv^{TRD}_A(λ). Thus, Adv^{TRD1}_A(λ) ≤ 2·Adv^{TRD}_A(λ). Hence, the lemma holds. ⊔



Lemma 10. Our PEKS scheme is SPP secure.
Proof. We show that if there exists an adversary A = (A0, A1, A2) against the security experiment Exp^{SPP}_A(λ), it can be used in the construction of another adversary B = (B0, B1) against the security experiment Exp^{TRD1}_A(λ). B functions as follows: first, B0 receives as input G, G1, e, g, g^a, g^{b0}, g^{b1}, and g^c. B0 implicitly sets (SkR, PkR) = (c, g^c) and (SkC, PkC) = (a, g^a). B generates ΠC, the simulated NIZK proof of knowledge of SkC, given PkC. B0 invokes A0 with the necessary inputs. A0 generates the keys (PkS, PkT). B checks if e(PkT, g) = e(PkC, PkS). B0 learns SkS from the simulation of A0. Then B0 invokes A1. B0 answers all the queries of A1 to the oracles Trapdoor(·) and H(·). The hash oracle H(·) can be queried by A in two ways: 1) directly, or 2) indirectly through a trapdoor query. Let X be {g^{b0}, g^{b1}}. Whenever an H(w) query is made (directly or indirectly) for some keyword w, B0 samples random α ∈R Zp and computes A = g^α. If X = ∅, B0 returns Y = g^α. Else, B0 selects x ∈R X and returns Y, where

  Y = x with probability 1/Q, and Y = A with probability 1 − 1/Q.

Here, Q is the maximum number of queries made to the oracle H(·), including the direct and indirect ones. B0 adds ⟨w, Y⟩ to LH. If x is returned, X is updated as X = X \ {x}. When a trapdoor query is made by A1 for some keyword w, B0 selects random l ∈ Zp and computes Tw1 = e(PkR, H(w))^{SkS} · e(PkS, g^l) and Tw2 = e(PkT, g^l). It then returns Tw = (Tw1, Tw2). Now, A1 returns two keywords: w0* and w1*. B0 returns PkS. If for some β ∈ {0, 1}, H(wβ*) was queried but neither g^{b0} nor g^{b1} was returned, B0 sends an instruction to B1 to return a random bit. This instruction is passed through the auxiliary variable st2. Otherwise B0 makes sure that {H(w0*), H(w1*)} = {g^{b0}, g^{b1}} holds.
When B1 is invoked with the inputs e(g^k, g^r)^a and Ωd, it invokes A2(st2, Ωd, e(g^k, g^r)^a). A2's queries to the trapdoor and the hash oracles are responded to in the same way as was being done for A1. It is easy to see that if A2 can identify whether the trapdoor corresponds to w0* or w1*, B1 can identify d.
We now calculate the success probability of B. If B1 aborts, then it returns a random bit. Thus, if B aborts, its success probability would be 1/2. If it does not abort, then its success probability will be the same as that of A, which is Pr[Exp^{SPP}_A(λ) = 1]. B has to abort if {H(w0*), H(w1*)} ≠ {g^{b0}, g^{b1}}. Now, we know that B1 aborts if A1 queries H(w0*) or H(w1*), but g^{b0} or g^{b1} is not returned. The probability of this event not happening is at least (1/Q^2)(1 − 1/Q)^{Q−2}. Thus, the probability that B will succeed is
  Pr[Exp^{TRD1}_B(λ) = 1] ≥ (1/2)·Pr[B aborts] + Pr[B does not abort]·Pr[Exp^{SPP}_A(λ) = 1]
                          = (1/2)·Pr[B aborts] + Pr[B does not abort]·(1/2 + Adv^{SPP}_A(λ))
                          ≥ 1/2 + (1/Q^2)(1 − 1/Q)^{Q−2}·Adv^{SPP}_A(λ)
                          ≥ 1/2 + (1/(4(Q−1)^2))·Adv^{SPP}_A(λ).
Thus, Adv^{TRD1}_B(λ) ≥ (1/(4(Q−1)^2))·Adv^{SPP}_A(λ). Therefore, Adv^{SPP}_A(λ) ≤ 4(Q−1)^2·Adv^{TRD1}_B(λ). ⊔
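The reductions above, and those in the next two sections, all rely on the same random-oracle programming step: every hash query is answered with a fresh random group element, except that with probability 1/Q one of the designated challenge elements is embedded. The following is a small illustrative sketch of this bookkeeping (our own toy helper, with group elements again represented by exponents as in the earlier sketch; it is not part of the scheme):

```python
import secrets

class ProgrammedHash:
    """Toy bookkeeping for the proofs' random-oracle programming: each query
    gets a fresh random element, except that with probability 1/Q a
    not-yet-used special (challenge) value is embedded instead."""
    def __init__(self, special, Q, p):
        self.table = {}                 # keyword -> group element (as exponent mod p)
        self.special = list(special)    # e.g. the elements of K or X in the proofs
        self.Q = Q                      # bound on the number of hash queries
        self.p = p

    def query(self, w):
        if w in self.table:             # answer repeated queries consistently
            return self.table[w]
        if self.special and secrets.randbelow(self.Q) == 0:
            h = self.special.pop()      # embed a challenge element (prob. 1/Q)
        else:
            h = secrets.randbelow(self.p)   # fresh random element of G
        self.table[w] = h
        return h
```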

5 User Key Indistinguishability

In this section, we discuss a different security property of our PEKS scheme. We call it
User Key Indistinguishability. We show that our scheme does not allow a probabilistic
polynomial time adversary to link a trapdoor to a user. Let us consider the following
security experiment Exp^{UKI}_A(λ).

Definition 4. (UKI-Security).

Exp^{UKI}_A(λ):
  (G, G1, p, g, e, H) ← Setup(λ)
  (SkR0, PkR0) ← KeyGenR(G, G1, p, g, e, H)
  (SkR1, PkR1) ← KeyGenR(G, G1, p, g, e, H)
  (PkC, ΠC) ← KeyGenC(G, G1, p, g, e, H)
  (st1, PkS, PkT) ← A0^{KeyGenS}(G, G1, p, g, e, H, PkR0, PkR1, PkC, ΠC)
  if e(PkT, g) ≠ e(PkC, PkS)
    return 0
  (w, st2) ← A1^{Trapdoor(·),H(·)}(st1)
  d ←$ {0, 1}
  Tw = Trapdoor(SkRd, w, ·)
  d′ ← A2^{Trapdoor(·),H(·)}(st2, Tw)
  return d = d′


The advantage of an adversary A = (A0, A1, A2) against the security experiment Exp^{UKI}_A(λ) is denoted as Adv^{UKI}_A(λ). We define it as follows:

  Adv^{UKI}_A(λ) = Pr[Exp^{UKI}_A(λ) = 1] − 1/2.

We say that a PEKS scheme is UKI-secure if for all probabilistic polynomial time adversaries A, Adv^{UKI}_A(λ) is negligible.

Theorem 2. Our PEKS scheme offers UKI-security.

Proof. We show that if there exists an adversary A = (A0, A1, A2) against the security experiment Exp^{UKI}_A(λ), it can be used in the construction of another adversary B = (B0, B1) against the security experiment Exp^{TRD1}_B(λ). B works as follows.

When B0 is invoked with the inputs G, G1, e, g, g^a, g^{b0}, g^{b1}, g^c, it implicitly sets (SkR0, PkR0) = (b0, g^{b0}), (SkR1, PkR1) = (b1, g^{b1}), and (SkC, PkC) = (a, g^a). B0 generates a simulated proof ΠC of knowledge of SkC, given PkC. B0 invokes A0 with G, G1, p, g, e, H, PkR0, PkR1, PkC, ΠC. A0 returns st1, PkS, and PkT. B0 learns SkS from the simulation of A0. Now, B0 invokes A1. B0 answers all queries of A1 to the oracles Trapdoor(·) and H(·). Whenever Trapdoor(w) is queried, an H(w) query is made automatically; this H(w) is an indirect query. A1 can also make direct queries to H(·). When B0 receives a direct or indirect query to the H(·) oracle for a keyword w, B0 samples a random α ∈R Zp and computes R = g^α. If g^c has already been returned for a previous hash query, then B0 returns H(w) = R. If g^c has never been returned for a previous hash query, then B0 returns

  H(w) = g^c with probability 1/Q, and H(w) = R with probability 1 − 1/Q.

Now, we describe how B0 answers the queries of A1 to the Trapdoor(·) oracle. Whenever B0 receives a trapdoor query for a keyword w and a receiver key Pku ∈ {PkR0, PkR1}, it samples random r̃ ∈R Zp and computes T1 = e(Pku, H(w))^{SkS} · e(PkS, g^{r̃}) and T2 = e(PkT, g^{r̃}). B0 returns Tw = (T1, T2) as the response.
Now, A1 returns a keyword w*. If H(w*) was previously queried and g^c was not returned, B0 sends an instruction to B1 to abort and return a random bit. This instruction is passed to B1 via the state variable st2. If H(w*) was never queried before, B0 sets H(w*) = g^c. Now, B0 returns PkS. When B1 is invoked with the inputs T2 and T1, it invokes A2 with these inputs. B1 answers all the queries of A2 to the trapdoor oracle in the same way as was done by B0. B1 answers the hash queries of A2 by returning randomly sampled elements from G. It is easy to see that if A2 can identify the challenged input, so can B1.
Let us calculate the probability of success of B. B1 aborts and returns a random bit if B0 is unsuccessful in setting H(w*) = g^c. This can happen in one of two situations:
1) A1 queries H(w*), but B0 fails to return g^c.
2) A1 does not query H(w*), but B0 returns g^c in response to a hash query for a different keyword.
Let P1 and P2 denote the probabilities of occurrence of situations 1 and 2, respectively. Hence, P1 = 1 − (1 − 1/Q)^i · (1/Q), where i is such that the (i + 1)-th hash query corresponds to the keyword w*. P1 is maximised when i = Q − 1, that is, when the last hash query corresponds to the challenge keyword. Similarly, P2 = 1 − (1 − 1/Q)^Q. So, the probability that B1 will have to abort is at most max(P1, P2) = 1 − (1 − 1/Q)^{Q−1} · (1/Q), and the least probability that B1 will not abort is (1 − 1/Q)^{Q−1} · (1/Q) ≥ 1/(4(Q−1)).
Now, Pr[Exp^{TRD1}_B(λ) = 1] ≥ Pr[B1 aborts]·(1/2) + Pr[B1 does not abort]·Pr[Exp^{UKI}_A(λ) = 1] = 1/2 + Pr[B1 does not abort]·Adv^{UKI}_A(λ) ≥ 1/2 + (1/(4(Q−1)))·Adv^{UKI}_A(λ). Thus, Adv^{TRD1}_B(λ) ≥ (1/(4(Q−1)))·Adv^{UKI}_A(λ). That is, Adv^{UKI}_A(λ) ≤ 4(Q−1)·Adv^{TRD1}_B(λ). Hence, the result holds.

6 Server Key Indistinguishability

In this section, we show that our scheme satisfies another security definition, which we call Server Key Indistinguishability. This security notion assesses the invulnerability of a PEKS scheme against an attack that aims to link a trapdoor to the cloud server for which it is generated. That is, a PPT adversary will not be able to tell from a trapdoor which server key it corresponds to. For this purpose, we consider an adversary that can compromise the data sender. We underscore the fact that if the data owner cannot breach the server key indistinguishability property of our PEKS scheme, no other adversary can do it. Let us now consider the following security experiment.

Definition 5. (SKI-security).

Exp^{SKI}_A(λ):
  (G, G1, p, g, e, H) ← Setup(λ)
  (SkR, PkR) ← KeyGenR(G, G1, p, g, e, H)
  (PkC0, ΠC0) ← KeyGenC(G, G1, p, g, e, H)
  (PkC1, ΠC1) ← KeyGenC(G, G1, p, g, e, H)
  (st1, PkS0, PkS1, PkT0, PkT1) ← A0^{KeyGenS}(G, G1, p, g, e, H, PkR, PkC0, PkC1, ΠC0, ΠC1)
  if ∃ j ∈ {0, 1} : e(PkTj, g) ≠ e(PkCj, PkSj)
    return 0
  (w, st2) ← A1^{Trapdoor(·),H(·)}(st1)
  d ←$ {0, 1}
  Tw = Trapdoor(SkR, w, PkSd, PkTd, ·)
  d′ ← A2^{Trapdoor(·),H(·)}(st2, Tw)
  return d = d′

The advantage of an adversary A = (A0, A1, A2) against the security experiment is defined as
  Adv^{SKI}_A(λ) = Pr[Exp^{SKI}_A(λ) = 1] − 1/2.
The PEKS scheme is secure in terms of the Server Key Indistinguishability notion if for all PPT adversaries A = (A0, A1, A2), Adv^{SKI}_A(λ) ≤ negl(λ).

Assumption 10 Given G, G1, e : G × G → G1, let us consider the following security experiment Exp^{TRD3}_A(λ).

Exp^{TRD3}_A(λ):
  g ←$ G
  a ←$ Zp
  c ←$ Zp, f ←$ Zp
  (k, st) ← A0(G, G1, e, g, g^a, g^c, g^f)
  A ←$ G1
  B ←$ G1
  r ←$ Zp
  Ω0 ← (e(g^k, g^r)^a, e(g^k, g^{r+c·f}))
  Ω1 ← (A, B)
  d ←$ {0, 1}
  d′ ← A1(st, Ωd)
  return d = d′

The advantage of an adversary A = (A0, A1) against the security experiment Exp^{TRD3}_A(λ) is given by
  Adv^{TRD3}_A(λ) = Pr[Exp^{TRD3}_A(λ) = 1] − 1/2.
For all PPT adversaries A = (A0, A1), Adv^{TRD3}_A(λ) ≤ negl(λ).

Lemma 11. The BDH assumption (Definition 2) implies Assumption 10.

Proof. We show that if there exists an adversary A = (A0, A1) against the security experiment Exp^{TRD3}_A(λ), it can be used in the construction of another adversary B against the security experiment Exp^{BDH}_B(λ). B receives as input g^x, g^y, g^z, and a challenge Ωd ∈R {e(g,g)^{xyz}, R}, where R ∈R G1. B selects random c, f ∈R Zp, and invokes A0(G, G1, e, g, g^x, g^c, g^f). A0 returns k ∈ Zp. Now, B invokes A1 with the inputs Ωd^k and e(g^y, g^z)^k · e(g,g)^{kcf}. B returns what A1 returns. It is easy to see that Adv^{TRD3}_A(λ) ≤ Adv^{BDH}_B(λ). ⊔

Lemma 12. Given G, G1, e : G × G → G1, let us consider the following security experiment Exp^{TRD4}_A(λ).

Exp^{TRD4}_A(λ):
  g ←$ G
  a0 ←$ Zp, a1 ←$ Zp
  c ←$ Zp, f ←$ Zp
  (k0, k1, st) ← A0(G, G1, e, g, g^{a0}, g^{a1}, g^c, g^f)
  r ←$ Zp
  Ω0 ← (e(g^{k0}, g^r)^{a0}, e(g^{k0}, g^{r+c·f}))
  Ω1 ← (e(g^{k0}, g^r)^{a1}, e(g^{k0}, g^{r+c·f}))
  d ←$ {0, 1}
  d′ ← A1(st, Ωd)
  return d = d′

The advantage of an adversary A = (A0, A1) against the security experiment Exp^{TRD4}_A(λ) is given by
  Adv^{TRD4}_A(λ) = Pr[Exp^{TRD4}_A(λ) = 1] − 1/2.
For all PPT adversaries A = (A0, A1), Adv^{TRD4}_A(λ) ≤ negl(λ).

Proof. Let us consider the following security experiment:

Exp^{TRD5}_A(λ):
  g ←$ G
  A ←$ G1, B ←$ G1
  C ←$ G1, D ←$ G1
  Ω0 ← (A, B)
  Ω1 ← (C, D)
  d ←$ {0, 1}
  d′ ← A(G, G1, e, g, Ωd)
  return d = d′

The advantage of an adversary A against the security experiment Exp^{TRD5}_A(λ) is given by
  Adv^{TRD5}_A(λ) = Pr[Exp^{TRD5}_A(λ) = 1] − 1/2.
Now,
  Adv^{TRD4}_A(λ) + Adv^{TRD5}_A(λ)
  = Pr[Exp^{TRD4}_A(λ) = 1] + Pr[Exp^{TRD5}_A(λ) = 1] − 1
  = Pr[d = 0]·Pr[Exp^{TRD4}_A(λ) = 1 | d = 0] + Pr[d = 1]·Pr[Exp^{TRD4}_A(λ) = 1 | d = 1]
    + Pr[d = 0]·Pr[Exp^{TRD5}_A(λ) = 1 | d = 0] + Pr[d = 1]·Pr[Exp^{TRD5}_A(λ) = 1 | d = 1] − 1
  = (Pr[d = 0]·Pr[Exp^{TRD4}_A(λ) = 1 | d = 0] + Pr[d = 0]·Pr[Exp^{TRD5}_A(λ) = 1 | d = 0])
    + (Pr[d = 1]·Pr[Exp^{TRD4}_A(λ) = 1 | d = 1] + Pr[d = 1]·Pr[Exp^{TRD5}_A(λ) = 1 | d = 1]) − 1
  ≤ (1/2)·Pr[Exp^{TRD3}_A(λ) = 1 | d = 0] + (1/2)·Pr[Exp^{TRD3}_A(λ) = 1 | d = 1]
    + (1/2)·Pr[Exp^{TRD3}_A(λ) = 1 | d = 0] + (1/2)·Pr[Exp^{TRD3}_A(λ) = 1 | d = 1] − 1
  = 2·(Pr[d = 0]·Pr[Exp^{TRD3}_A(λ) = 1 | d = 0] + Pr[d = 1]·Pr[Exp^{TRD3}_A(λ) = 1 | d = 1] − 1/2)
  = 2·(Pr[Exp^{TRD3}_A(λ) = 1] − 1/2) = 2·Adv^{TRD3}_A(λ).
Hence, Adv^{TRD4}_A(λ) ≤ 2·Adv^{TRD3}_A(λ). Thus, the result holds. ⊔

Theorem 3. Our PEKS scheme is secure according to the SKI security notion.

Proof. We show that if there exists an adversary A = (A0, A1, A2) against the security experiment Exp^{SKI}_A(λ), it can be used in the construction of another adversary B = (B0, B1) against the security experiment Exp^{TRD4}_B(λ). B works as follows. When B0 is invoked with G, G1, e, g, g^{a0}, g^{a1}, g^c, g^f, it implicitly assigns (SkCi, PkCi) = (ai, g^{ai}) for i = 0, 1. B0 also assigns (SkR, PkR) = (c, g^c). It also computes ΠC0 and ΠC1 using the simulation method. Then B0 invokes A0 with all the necessary inputs. A0 returns st1, PkS0, PkS1, PkT0, and PkT1. B0 can find SkS0 and SkS1 from the simulation of A0. Now, B0 invokes A1(st1). B0 responds to all the queries of A1 to the oracles Trapdoor(·) and H(·). When A1 makes a call to the Trapdoor(·) oracle, it specifies the sender's public key (PkS0 or PkS1) and the corresponding keyword. In answering a trapdoor query, B0 needs to first make an H(·) query. We call such queries indirect queries to the hash oracle. These indirect queries are replied to in the same way as the direct queries. Whenever B0 receives a hash query for a keyword w, B0 selects a random element x ∈R Zp and checks whether some entry of the form ⟨·, −, g^f⟩ is already in listH (i.e., whether g^f has already been embedded). If so, B0 returns H(w) = g^x and adds ⟨w, x, g^x⟩ to listH. If g^f has not yet been embedded, B0 returns H(w) = R, where

  R = g^f with probability 1/Q, and R = g^x with probability 1 − 1/Q.

Here, Q is the total number of hash queries made by A1. If R = g^f, then B0 adds ⟨w, −, g^f⟩ to listH; else, B0 adds ⟨w, x, g^x⟩ to listH. Whenever a trapdoor query is received for a keyword w, a sender key PkS and a trapdoor key PkT, B0 first answers H(w). After this hash query, listH contains an entry ⟨w, α, Y⟩, where Y = H(w) and α is either the exponent of Y or '−'. Then B0 selects a random r ∈R Zp and computes Tw1 = e(PkR, H(w))^{SkS} · e(PkS, g^r), Tw2 = e(PkT, g^r), and Tw = (Tw1, Tw2). Note that B0 knows the values of both SkS0 and SkS1, so it can compute Tw1. Then B0 returns Tw as the response to the trapdoor query. Then A1 returns a keyword ŵ and its auxiliary variable st2. Now, B0 returns (PkS0, PkS1, st2). If H(ŵ) was queried and g^f was not returned, then B0 sends an instruction to B1 to return a random bit. This instruction is sent through the auxiliary variable that is generated by B0 and fed to B1 as an input. When B1 receives the challenge Ωd = (Ωd1, Ωd2), B1 computes T1 = Ωd2 and T2 = Ωd1. If B0 has sent it the instruction to return a random bit, then it does so. Otherwise, it invokes A2(st2, (T1, T2)). B1 answers all the queries of A2 to the oracles Trapdoor(·) and H(·) in the same way as B0 did for A1. Finally, A2 returns a bit d′, and B1 returns the same bit.
Let us now calculate the success probability of B. B1 aborts and returns a random bit if B0 cannot set H(ŵ) = g^f. Since B0 returns g^f with probability 1/Q for a hash query, the probability that B0 is able to set H(ŵ) = g^f is at least (1 − 1/Q)^{Q−1}·(1/Q) ≥ 1/(4(Q−1)). So, the least probability that B1 will not abort is (1 − 1/Q)^{Q−1}·(1/Q) ≥ 1/(4(Q−1)).
Now, Pr[Exp^{TRD4}_B(λ) = 1] ≥ Pr[B1 aborts]·(1/2) + Pr[B1 does not abort]·Pr[Exp^{SKI}_A(λ) = 1] = 1/2 + Pr[B1 does not abort]·Adv^{SKI}_A(λ) ≥ 1/2 + (1/(4(Q−1)))·Adv^{SKI}_A(λ). Thus, Adv^{TRD4}_B(λ) ≥ (1/(4(Q−1)))·Adv^{SKI}_A(λ). That is, Adv^{SKI}_A(λ) ≤ 4(Q−1)·Adv^{TRD4}_B(λ). Hence, the result holds.

7 Performance

In this section, we analyse the performance of our encryption scheme. In Table 1 we present the number of operations needed for each component of our scheme and compare its performance with that of PAEKS [14]. In Table 2 we provide a comparative study with other searchable encryption schemes.

PAEKS approach of [14]: In [14], the authors require 3 exponentiation operations and 1 hashing for encryption, which is the same as in our encryption algorithm. For trapdoor generation, PAEKS needs one exponentiation, one hashing and one bilinear map evaluation, whereas we need one exponentiation, one hashing and two bilinear map evaluations. For searching, i.e. the Test function, PAEKS needs only 2 bilinear map evaluations, whereas we need one exponentiation and two bilinear map evaluations (see Table 1). At the expense of these few extra computations we provide UKI security (see Definition 4) and SKI security (see Definition 5).

Operation     #(Exp)   #(Mult)   #(Bp)   #(Div)   #(Hash)
Encryption      3         1        0        0        1
Trapdoor        1         1        2        0        1
Test            1         0        2        2        0

Table 1. Complexity of our scheme in terms of the number of exponentiation (Exp), multiplication (Mult), bilinear map (Bp), division (Div) and hashing (Hash) operations.

Property                     SSE [12]   PSS [23]     [25]             LPSSE [17]   Πss [13]   PAEKS [14]   This paper
Non-adaptive security        yes        yes          no               yes          yes        yes          yes
Adaptive security            yes        no           no               no           no         yes          yes
Security against
  active adversary           no         no           no               no           yes        yes          yes
Probabilistic trapdoor       no         no           no               no           yes        yes          yes
Client storage               no         dictionary   trusted server   no           no         no           no
No. of rounds                1          2            2                1            1          1            1
Storage cost                 O(N)       O(N)         O(N)             O(N)         O(N)       O(N)         O(N)
No. of encryptions
  per keyword                O(N)       O(N)         O(N)             O(N)         1          1            1
Leakage following a search   –          yes          yes              yes          yes        no           no
UKI security                 no         no           no               no           no         no           yes
SKI security                 no         no           no               no           no         no           yes

Table 2. Properties and performance of different searchable encryption schemes. Search time is per keyword, where N is the number of documents.

8 Conclusion

In this paper, we proposed a public key searchable encryption (PEKS) scheme that achieves full unlinkability in terms of user key indistinguishability (UKI) and server key indistinguishability (SKI): a probabilistic polynomial time adversary can link a trapdoor neither to the receiver key nor to the server key for which it was generated. We formalised the UKI and SKI security notions and proved, under game-based modelling, that our scheme satisfies keyword privacy, trapdoor privacy, UKI security and SKI security. We also implemented a prototype of the scheme using the Java jPair library and validated it against the TIMIT dataset [1]; the details are presented in the full version of the paper. Our theoretical and empirical results show that the scheme is suitable for deployment in real world scenarios.
Acknowledgement: The authors are thankful to the anonymous reviewers, whose com-
ments greatly improved the quality of the manuscript. We also wish to thank Professor
Feng Hao and Dr Samiran Bag for providing several useful and valuable suggestions.

References
1. http://www.fon.hum.uva.nl/david/ma ssp/2007/timit/train/dr5/fsdc0/. 2007.
2. Samiran Bag, Indranil Ghosh Ray, and Feng Hao. A new leakage resilient symmetric searchable
encryption scheme for phrase search. In 19th International Conference on Security and Cryptography
SECRYPT, pages 366–373. SciTePress, 2022.
3. Dan Boneh, Xavier Boyen, and Hovav Shacham. Short group signatures. In Annual international
cryptology conference, pages 41–55. Springer, 2004.
4. Dan Boneh, Giovanni Di Crescenzo, Rafail Ostrovsky, and Giuseppe Persiano. Public key encryption
with keyword search. In Christian Cachin and Jan L. Camenisch, editors, Advances in Cryptology -
EUROCRYPT 2004, pages 506–522, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg.

5. Dan Boneh, Giovanni Di Crescenzo, Rafail Ostrovsky, and Giuseppe Persiano. Public Key Encryption
With Keyword Search. In International Conference on the Theory and Applications of Cryptographic
Techniques, pages 506–522. Springer, 2004.
6. Dan Boneh, Eyal Kushilevitz, Rafail Ostrovsky, and William E Skeith III. Public Key Encryption
That Allows PIR Queries. In Annual International Cryptology Conference, pages 50–67. Springer,
2007.
7. Ning Cao, Cong Wang, Ming Li, Kui Ren, and Wenjing Lou. Privacy-Preserving Multi-Keyword
Ranked Search Over Encrypted Cloud Data. volume 25, pages 222–233. IEEE, 2014.
8. David Cash, Joseph Jaeger, Stanislaw Jarecki, Charanjit S. Jutla, Hugo Krawczyk, Marcel-Catalin
Rosu, and Michael Steiner. Dynamic searchable encryption in very-large databases: Data structures
and implementation. In 21st Annual Network and Distributed System Security Symposium, NDSS
2014, San Diego, California, USA, February 23-26, 2014. The Internet Society, 2014.
9. David Cash, Stanislaw Jarecki, Charanjit Jutla, Hugo Krawczyk, Marcel-Cătălin Roşu, and Michael
Steiner. Highly-Scalable Searchable Symmetric Encryption With Support for Boolean Queries. In
Advances in Cryptology–CRYPTO 2013, pages 353–373. Springer, 2013.
10. Melissa Chase and Emily Shen. Substring-searchable symmetric encryption. Proceedings on Privacy
Enhancing Technologies, 2015(2):263 – 281, 01 Jun. 2015.
11. Reza Curtmola, Juan Garay, Seny Kamara, and Rafail Ostrovsky. Searchable symmetric encryption:
Improved definitions and efficient constructions. In Proceedings of the 13th ACM Conference on Com-
puter and Communications Security, CCS ’06, page 79–88, New York, NY, USA, 2006. Association for
Computing Machinery.
12. Reza Curtmola, Juan Garay, Seny Kamara, and Rafail Ostrovsky. Searchable Symmetric Encryption:
Improved Definitions and Efficient Constructions. volume 19, pages 895–934. IOS Press, 2011.
13. I. Ghosh Ray, Y. Rahulamathavan, and M. Rajarajan. A new lightweight symmetric searchable
encryption scheme for string identification. volume 8, pages 672–684, 2020.
14. Qiong Huang and Hongbo Li. An efficient public-key searchable encryption scheme secure against
inside keyword guessing attacks. volume 403-404, pages 1 – 14, 2017.
15. Seny Kamara, Charalampos Papamanthou, and Tom Roeder. Dynamic Searchable Symmetric Encryp-
tion. In Proceedings of the 2012 ACM conference on Computer and communications security, pages
965–976. ACM, 2012.
16. Z. A. Kissel and J. Wang. Verifiable phrase search over encrypted data secure against a semi-honest-
but-curious adversary. In 2013 IEEE 33rd International Conference on Distributed Computing Systems
Workshops, pages 126–131, 2013.
17. Mingchu Li, Wei Jia, Cheng Guo, Weifeng Sun, and Xing Tan. LPSSE: Lightweight Phrase Search
With Symmetric Searchable Encryption in Cloud Storage. In Information Technology-New Generations
(ITNG), 2015 12th International Conference on, pages 174–178. IEEE, 2015.
18. M. Naveed, M. Prabhakaran, and C. A. Gunter. Dynamic searchable encryption via blind storage. In
2014 IEEE Symposium on Security and Privacy, pages 639–654, 2014.
19. Dawn Xiaoding Song, David Wagner, and Adrian Perrig. Practical Techniques for Searches on En-
crypted Data. In Security and Privacy, 2000. S&P 2000. Proceedings. 2000 IEEE Symposium on,
pages 44–55. IEEE, 2000.
20. Emil Stefanov, Charalampos Papamanthou, and Elaine Shi. Practical Dynamic Searchable Encryption
With Small Leakage. In NDSS, volume 14, pages 23–26, 2014.
21. Shi-Feng Sun, Joseph K. Liu, Amin Sakzad, Ron Steinfeld, and Tsz Hon Yuen. An efficient non-
interactive multi-client searchable encryption with support for boolean queries. In Ioannis Askoxylakis,
Sotiris Ioannidis, Sokratis Katsikas, and Catherine Meadows, editors, Computer Security – ESORICS
2016, pages 154–172, Cham, 2016. Springer International Publishing.
22. Wenhai Sun, Bing Wang, Ning Cao, Ming Li, Wenjing Lou, Y. Thomas Hou, and Hui Li. Privacy-
preserving multi-keyword text search in the cloud supporting similaritybased ranking. In IN ASIACCS
2013, 2013.
23. Yinqi Tang, Dawu Gu, Ning Ding, and Haining Lu. Phrase Search Over Encrypted Data With
Symmetric Encryption Scheme. In 2012 32nd International Conference on Distributed Computing
Systems Workshops, pages 471–480. IEEE, 2012.
24. B. Wang, S. Yu, W. Lou, and Y. T. Hou. Privacy-preserving multi-keyword fuzzy search over encrypted
data in the cloud. In IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pages
2112–2120, 2014.
25. S. Zittrower and C. C. Zou. Encrypted phrase searching in the cloud. In 2012 IEEE Global Commu-
nications Conference (GLOBECOM), pages 764–770, 2012.
