
AMBO UNIVERSITY, HHC

SCHOOL OF INFORMATICS and ELECTRICAL ENGINEERING


DEPARTMENT OF COMPUTER SCIENCE

Compiler Design

Chapter 3: Syntax Analysis

Instructor: Fikru Tafesse (MSc.)


Email: [email protected]

The Concept of Syntax Analysis

• The tokens generated by the lexical analyzer are accepted by the next phase of the compiler, i.e. the syntax analyzer.
• The syntax analyzer, or parser, comes after lexical analysis.
• This is one of the important components of the front end of the compiler.
• Syntax analysis is the second phase of compilation.
• The syntax analyzer (parser) basically checks the syntax of the language.
• The syntax analyzer takes the tokens from the lexical analyzer and groups them in such a way that some programming structure (syntax) can be recognized.
• After grouping the tokens, if the syntax cannot be recognized, then a syntactic error is reported.
• This overall process is called syntax checking of the language.
Definition of Parser
• Parsing, or syntax analysis, is a process which takes an input string w and either produces a parse tree (syntactic structure) or reports syntactic errors.
For example: a = b + 10;
• The above programming statement is given to the lexical analyzer.
• The lexical analyzer will divide it into a group of tokens.
• The syntax analyzer takes the tokens as input and generates a tree-like structure called a parse tree.
• The parse tree shows how the statement gets parsed according to its syntactic specification.
Role of Parser
• In the process of compilation, the parser and lexical analyzer work together.
• That means, when the parser requires tokens, it invokes the lexical analyzer.
• In turn, the lexical analyzer supplies tokens to the syntax analyzer (parser).
Cont'd ...
• The parser collects a sufficient number of tokens and builds a parse tree.
• Thus, by building the parse tree, the parser smartly finds the syntactical errors, if any.
• It is also necessary that the parser should recover from commonly occurring errors so that the remaining input can still be processed.
Why are the lexical and syntax analyzers separated?
• The lexical analyzer scans the input program and collects the tokens from it.
• The parser builds a parse tree using these tokens.
• These are two important activities, and they are independently carried out by these two phases of the compiler.
• Separating out these two phases has two advantages:
1. It accelerates the process of compilation.
2. Errors in the source input can be identified precisely.
Basic Issues in Parsing
• There are two important issues in parsing:
1. Specification of syntax
2. Representation of the input after parsing
• A very important issue in parsing is the specification of syntax in a programming language.
• Specification of syntax means how to write the statements of the programming language.
• There are some characteristics of the specification of syntax:
i. The specification should be precise and unambiguous.
ii. The specification should be in detail.
iii. The specification should be complete.
• Such a specification is called a "Context Free Grammar" (CFG).
• Another important issue in parsing is the representation of the input after parsing.
• Lastly, the most crucial issue is the parsing algorithm.
Context Free Grammars
• Specification of the input can be done by using a "Context Free Grammar".
• A context free grammar G is a collection of the following:
1. V, a set of non-terminals
2. T, a set of terminals
3. S, a start symbol
4. P, a set of production rules
• Thus G can be represented as G = (V, T, S, P).
• The production rules are given in the form A → α, where A is a non-terminal and α is a string of terminals and non-terminals.
Cont'd ...
• Example 1: let the language L = aⁿbⁿ, where n ≥ 1.
• Let G = (V, T, S, P), where V = {S}, T = {a, b} and S is the start symbol; then the production rules are:

S → aSb | ab

• These production rules actually define the language aⁿbⁿ.
• The non-terminal symbols occur at the left-hand side (LHS).
• These are the symbols which need to be expanded.
• The terminal symbols are nothing but the tokens used in the language.
• Thus, any language construct can be defined by a context free grammar.
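As a small illustration (a sketch, not part of the original slides), the four components of the grammar above can be written down directly as Python data, and strings of the language derived by repeatedly expanding the leftmost S:

```python
# The CFG G = (V, T, S, P) for the language a^n b^n (n >= 1),
# written as plain Python data (an illustrative sketch).
V = {'S'}                      # non-terminals
T = {'a', 'b'}                 # terminals
S = 'S'                        # start symbol
P = {'S': ['aSb', 'ab']}       # production rules: S -> aSb | ab

def derive(n):
    """Derive a^n b^n by applying S -> aSb (n - 1) times, then S -> ab."""
    s = S
    for _ in range(n - 1):
        s = s.replace('S', P['S'][0], 1)   # expand leftmost S with S -> aSb
    return s.replace('S', P['S'][1], 1)    # finish with S -> ab

print(derive(3))   # aaabbb
```

Each call to `derive(n)` traces one derivation of the string aⁿbⁿ from the start symbol.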
Cont'd ...
• For example, if we want to define the declarative sentence using a context free grammar, then it could be as follows. Using the given rules, we can derive:

Fig. 3.4. Parse tree for the derivation of int id , id ; (the root State has children Type, List and Terminator; Type derives int, List derives id , id, and Terminator derives ;).

• Hence, int id, id; can be defined by means of the above context free grammar.
Cont'd ...

• The following rules should be followed while writing context free grammars (CFG):

1. A single non-terminal should be at the LHS.
2. The rule should always be in the form LHS → RHS, where the RHS may be a combination of non-terminal and terminal symbols.
3. The NULL derivation can be specified as NT → ε.
4. One of the non-terminals should be the start symbol, and conventionally we write the rules for this non-terminal first.
Derivation and Parse Tree
• Derivation from S means generation of the string w from S.
• For constructing a derivation two things are important:
i. Choice of non-terminal from several others.
ii. Choice of rule from the production rules for the corresponding non-terminal.
Definition of Derivation Tree
• Let G = (V, T, P, S) be a context free grammar.
• The derivation tree is a tree which can be constructed by the following properties:
i. The root has label S.
ii. Every vertex has a label from (V ∪ T ∪ {ε}).
iii. If there exists a vertex A with children R1, R2, …, Rn, then there should be a production A → R1 R2 … Rn.
iv. The leaf nodes are from set T and the interior nodes are from set V.
Cont'd ...

• Instead of choosing an arbitrary non-terminal, one can choose:

i. either the leftmost non-terminal in a sentential form, in which case it is called a leftmost derivation;
ii. or the rightmost non-terminal in a sentential form, in which case it is called a rightmost derivation.
Example 1: Let G be a context free grammar for which the production rules are given as
below:

• Derive the string aaabbabbba using the above grammar.
Cont'd ...
• Solution: the productions applied at each step are:

Leftmost derivation: B → aBB, B → aBB, B → aBB, B → bS, S → bA, A → a, B → b, B → bS, S → bA, A → a

Rightmost derivation: B → bS, S → bA, A → a, B → aBB, B → b, B → bS, S → bA, A → a
Cont'd ...
• The parse trees for the above derivations are as given below:

(a) Parse tree for the leftmost derivation   (b) Parse tree for the rightmost derivation


Cont'd ...
• Example 2: Design a derivation tree for the following grammar:

• Also obtain the leftmost and rightmost derivation for the string ‘aaabaab’ using above
grammar.

• Solution:

Cont'd ...
• The derivation tree can be drawn as follows:

(a) Derivation Tree for Leftmost Derivation (b)Derivation Tree for Rightmost Derivation
Ambiguous Grammar
• A grammar G is said to be ambiguous if it generates more than one parse tree for some sentence of the language L(G).
Example 1:
For the input strings id+id*id and id+id+id, a grammar such as E → E + E | E * E | id gives two different parse trees:

a) Parse Tree 1: the root E → E + E, with the left E expanded again to E + E (grouping as (id + id) + id).
b) Parse Tree 2: the root E → E + E, with the right E expanded again to E + E (grouping as id + (id + id)).

• There are two different parse trees for deriving the strings id+id*id and id+id+id.

Cont'd ...

Example 2: Find whether the following grammar is ambiguous or not.

Solution: we will construct the parse trees for the string aab.

• There are two different parse trees for deriving the string aab.
• Hence, the given grammar is an ambiguous grammar.
Parsing Techniques

• There are two parsing techniques. These are:


1. Top-down parsing
2. Bottom-up parsing
• These parsing techniques work on the following principles:
• The parser scans the input string from left to right and identifies whether the derivation is leftmost or rightmost.
• The parser makes use of production rules for choosing the appropriate derivation.
• The different parsing techniques use different approaches in selecting the appropriate rules for derivation.
• And finally, a parse tree is constructed.
Parsing Techniques Cont’d …
• When the parse tree is constructed from the root and expanded towards the leaves, such a parser is called a top-down parser.
• The name itself tells us that the parse tree is built from top to bottom.
• When the parse tree is constructed from the leaves towards the root, such a parser is called a bottom-up parser.
• Thus, the parse tree is built in a bottom-up manner.
• The next figure shows the types of parsers.
Parsing Techniques Cont’d …

(Figure: classification of parsers into top-down and bottom-up parsers.)
Top-down Parsing Techniques

There are two types by which top-down parsing can be performed:

1. Backtracking
2. Predictive parsing
Backtracking
• Backtracking is a technique in which, for the expansion of a non-terminal symbol, we choose one alternative, and if some mismatch occurs then we try another alternative, if any.
• Example:

• If for a non-terminal there are multiple production rules beginning with the same input symbol, then to get the correct derivation we need to try all these possibilities.
• Secondly, in backtracking we need to move some levels upward in order to check other possibilities.
• This adds a lot of overhead to the implementation of parsing.
Cont'd
• And hence it becomes necessary to eliminate the backtracking by modifying the grammar.
• A backtracking parser will try different production rules to find a match for the input string, backtracking each time.
• Backtracking is more powerful than predictive parsing.
• But a backtracking parser is slower, and it requires exponential time in general.
• Hence, backtracking is not preferred for practical compilers.
Limitation:
• If the given grammar has a large number of alternatives, then the cost of backtracking is high.
Predictive Parser
• As the name indicates, a predictive parser tries to predict the next construction using one or more lookahead symbols from the input string.
• There are two types of predictive parser:
i. Recursive Descent Parser
ii. LL (1) Parser
i. Recursive Descent Parser
• A parser that uses a collection of recursive procedures for parsing the given input string is called a Recursive Descent (RD) parser.
• In this type of parser, the CFG is used to build the recursive routines.
• The RHS of the production rule is directly converted to a program.
• For each non-terminal a separate procedure is written, and the body of the procedure (code) is the RHS of the corresponding non-terminal.
Cont'd

Basic steps for the construction of an RD parser

• The RHS of the rule is directly converted into program code, symbol by symbol:
1. If the symbol is a non-terminal, then a call to the procedure corresponding to that non-terminal is made.
2. If the symbol is a terminal, then it is matched with the lookahead from the input.
3. If the production rule has many alternatives, then all these alternatives have to be combined into a single body of the procedure.
4. The parser should be activated by the procedure corresponding to the start symbol.
Cont'd

Example: Consider the grammar with start symbol S:

S → cAd
A → ab | a

• To construct a parse tree top-down for the input string w = cad, begin with a tree consisting of a single node labeled S, with the input pointer pointing to c, the first symbol of w.
• S has only one production, so we use it to expand S and obtain the tree as in the figure.

• The leftmost leaf, labeled c, matches the first symbol of the input w, so we advance the input pointer to a, the second symbol of w, and consider the next leaf, labeled A.
Cont'd

• Now we expand A using the leftmost alternative.

• We have a match for the second input symbol a, so we advance the input pointer to d, the third input symbol, and compare d against the next leaf, labeled b.
• Since b does not match d, we report failure and go back to A to see whether there is another alternative for A that has not been tried but might produce a match.
• In going back to A, we must reset the input pointer to position 2, the position it had when we first came to A, which means that the procedure for A must store the input pointer in a local variable.
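The backtracking behaviour traced above can be sketched as recursive procedures (a Python sketch; consistent with the example, the grammar is taken to be S → cAd, A → ab | a, which is what the trace implies):

```python
# Recursive-descent parsing with backtracking for S -> cAd, A -> ab | a.
# Each procedure returns the new input position on success, or None.
def parse_S(s, i):
    # S -> cAd: match c, call the procedure for A, then match d.
    if i < len(s) and s[i] == 'c':
        j = parse_A(s, i + 1)
        if j is not None and j < len(s) and s[j] == 'd':
            return j + 1
    return None

def parse_A(s, i):
    # Try A -> ab first; on mismatch the input position i is reused
    # unchanged (this is the "stored input pointer") to try A -> a.
    if s[i:i + 2] == 'ab':
        return i + 2
    if s[i:i + 1] == 'a':
        return i + 1
    return None

def parse(w):
    """Accept w iff S derives exactly w."""
    return parse_S(w, 0) == len(w)

print(parse('cad'))    # True  (uses the second alternative A -> a)
print(parse('cabd'))   # True  (uses the first alternative A -> ab)
```

For the input cad, `parse_A` first tries A → ab, fails, and retries A → a from the same saved position, exactly as described in the slide.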
Cont'd
ii. LL (1) Parser
• The simple block diagram for the LL (1) parser is as given below.

Cont'd
• The data structures used by the LL (1) parser are:
 Input buffer
 Stack
 Parsing table
• The LL (1) parser uses the input buffer to store the input tokens.
• The stack is used to hold the left sentential form.
• The symbols on the RHS of a rule are pushed onto the stack in reverse order, i.e. from right to left.
• Thus, the use of a stack makes this algorithm non-recursive.
• The parsing table is basically a two-dimensional array.
Cont'd
• The table has rows for non-terminals and columns for terminals.
• The table can be represented as M[A, a], where A is a non-terminal and a is the current input symbol.
• The parser works as follows:
• The parsing program reads the top of the stack and the current input symbol.
• With the help of these two symbols, the parsing action is determined.
• The parsing action can be one of the following:
Cont'd
Top    Input token    Parsing action
$      $              Parsing successful; halt
a      a              Pop a and advance the lookahead to the next token
a      b              Error
A      a              Refer to table M[A, a]; if the entry at M[A, a] is error, report an error
A      a              If the entry at M[A, a] is A → PQR, then pop A, then push R, then push Q, then push P

• The parser consults the table M[A, a] each time while taking the parsing actions; hence this type of parsing method is called a table driven parsing algorithm.
• The configuration of the LL (1) parser can be defined by the top of the stack and the lookahead token.
Cont'd
• The parser moves from one configuration to the next, and the input is successfully parsed if the parser reaches the halting configuration.
• When the stack is empty and the next token is $, this corresponds to a successful parse.
• Construction of Predictive LL (1) Parser
• The construction of a predictive LL (1) parser is based on two very important functions: FIRST and FOLLOW.
• For the construction of a predictive LL (1) parser we have to follow these steps:
1. Computation of the FIRST and FOLLOW functions
2. Construction of the predictive parsing table using the FIRST and FOLLOW functions
3. Stack implementation
4. Construction of the parse tree
Cont'd
First Function
• FIRST(α) is the set of terminal symbols that begin the strings derived from α.
Example: A → abc / def / ghi
then FIRST(A) = {a, d, g}
Rules for Calculating the FIRST Function
1. For a production rule X → ε, FIRST(X) = {ε}.
2. For any terminal symbol a, FIRST(a) = {a}.
3. For a production rule X → Y1 Y2 Y3:
• Calculating FIRST(X):
If ε does not belong to FIRST(Y1), then FIRST(X) = FIRST(Y1).
If ε belongs to FIRST(Y1), then FIRST(X) = {FIRST(Y1) - ε} U FIRST(Y2Y3).

Cont'd
• Calculating FIRST(Y2Y3):
If ε does not belong to FIRST(Y2), then FIRST(Y2Y3) = FIRST(Y2).
If ε belongs to FIRST(Y2), then FIRST(Y2Y3) = {FIRST(Y2) - ε} U FIRST(Y3).

• Similarly, we can expand the rule for any production rule X → Y1 Y2 Y3 … Yn.
FOLLOW Function:
• FOLLOW(A) is the set of terminal symbols that appear immediately to the right of A in some sentential form.
Example: S → aAc, A → bd
FIRST(S) = {a}, FIRST(A) = {b}
FOLLOW(S) = {$}, FOLLOW(A) = {c}
Cont'd
Rules for Calculating Follow Function:
1. For the start symbol S, place $ in Follow(S):
Follow(S) = {$}
2. For any production A → αB:
Follow(B) = Follow(A)
3. For any production rule A → αBβ:
• If ε does not belong to First(β), then Follow(B) = First(β).
• If ε belongs to First(β), then Follow(B) = {First(β) - ε} U Follow(A).
Cont'd

Note
• ε may appear in the First function of a non-terminal.
• ε will never appear in the Follow function of a non-terminal.
• It is recommended to eliminate left recursion from the grammar, if present, before calculating the First and Follow functions.
• We calculate the Follow function of a non-terminal by looking at where it appears on the RHS of production rules.
Cont'd
• Example 1: Calculate the First and Follow functions of the given grammar:

S → aBDh
B → cC
C → bC / ε
D → EF
E → g / ε
F → f / ε

Solution:
Step 1: Calculate the First and Follow functions:

        FIRST         FOLLOW
S       {a}           {$}
B       {c}           {g, f, h}
C       {b, ε}        {g, f, h}
D       {g, f, ε}     {h}
E       {g, ε}        {f, h}
F       {f, ε}        {h}
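These sets can be checked mechanically with a small fixed-point computation (a Python sketch, not part of the slides; ε is written as the empty string '', and the grammar is the one used in the stack trace of Step 3, i.e. with C → bC | ε):

```python
# Fixed-point computation of FIRST and FOLLOW for the Example 1 grammar.
GRAMMAR = {
    'S': ['aBDh'],
    'B': ['cC'],
    'C': ['bC', ''],   # '' stands for the ε-production
    'D': ['EF'],
    'E': ['g', ''],
    'F': ['f', ''],
}

def first_of_string(s, grammar, first):
    """FIRST of a sentential form, following rule 3 above."""
    out = set()
    for sym in s:
        if sym not in grammar:          # terminal: FIRST(a) = {a}
            out.add(sym)
            return out
        out |= first[sym] - {''}
        if '' not in first[sym]:
            return out
    out.add('')                         # every symbol can derive ε
    return out

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                      # iterate until no set grows
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                new = first_of_string(rhs, grammar, first)
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first

def follow_sets(grammar, start, first):
    follow = {nt: set() for nt in grammar}
    follow[start].add('$')              # rule 1: $ goes into Follow(S)
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                for k, sym in enumerate(rhs):
                    if sym not in grammar:
                        continue        # only non-terminals get Follow sets
                    beta = first_of_string(rhs[k + 1:], grammar, first)
                    new = beta - {''}
                    if '' in beta:      # rule 3, second case
                        new |= follow[nt]
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True
    return follow

first = first_sets(GRAMMAR)
follow = follow_sets(GRAMMAR, 'S', first)
print(sorted(follow['B']))   # ['f', 'g', 'h']
```

The computed sets agree with the table above, e.g. FIRST(D) = {g, f, ε} and FOLLOW(B) = {g, f, h}.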
Cont'd
Step 2: Construct the parse table using the First and Follow functions:

        a            b         c        f         g         h         $
S       S → aBDh
B                              B → cC
C                    C → bC             C → ε     C → ε     C → ε
D                                       D → EF    D → EF    D → EF
E                                       E → ε     E → g     E → ε
F                                       F → f               F → ε

Note: blank entries are errors.
Cont'd
Step 3: Stack implementation for the input string acbgh$:

Stack      Input      Production / Action
S$         acbgh$     S → aBDh
aBDh$      acbgh$     Pop a
BDh$       cbgh$      B → cC
cCDh$      cbgh$      Pop c
CDh$       bgh$       C → bC
bCDh$      bgh$       Pop b
CDh$       gh$        C → ε
Dh$        gh$        D → EF
EFh$       gh$        E → g
gFh$       gh$        Pop g
Fh$        h$         F → ε
h$         h$         Pop h
$          $          Accept
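The trace above can be reproduced by a small table-driven driver (a Python sketch; blank table entries are represented by absent keys, and ε-productions by the empty string):

```python
# Table-driven LL(1) driver for the Example 1 grammar and parse table.
TABLE = {
    ('S', 'a'): 'aBDh',
    ('B', 'c'): 'cC',
    ('C', 'b'): 'bC', ('C', 'f'): '', ('C', 'g'): '', ('C', 'h'): '',
    ('D', 'f'): 'EF', ('D', 'g'): 'EF', ('D', 'h'): 'EF',
    ('E', 'f'): '', ('E', 'g'): 'g', ('E', 'h'): '',
    ('F', 'f'): 'f', ('F', 'h'): '',
}

def ll1_parse(tokens):
    stack = ['$', 'S']           # the top of the stack is stack[-1]
    tokens = tokens + '$'
    i = 0
    while stack:
        top, tok = stack.pop(), tokens[i]
        if top == tok == '$':
            return True                  # halting configuration: accept
        if top.isupper():                # non-terminal: consult M[A, a]
            rhs = TABLE.get((top, tok))
            if rhs is None:
                return False             # blank entry: error
            stack.extend(reversed(rhs))  # push the RHS right-to-left
        elif top == tok:
            i += 1                       # terminal matched: pop and advance
        else:
            return False                 # terminal mismatch: error
    return False

print(ll1_parse('acbgh'))   # True
```

The driver performs exactly the actions listed in the parsing-action table: match-and-advance on terminals, table lookup and right-to-left push on non-terminals, and accept on $/$.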
Cont'd

Step 4: Construct the parse tree:

(Parse tree for the string acbgh: the root S has children a, B, D, h; B → cC; C → bC with the inner C → ε; D → EF; E → g; F → ε.)
Cont'd
• Example 2: Calculate the First and Follow functions of the given grammar:

S → A
A → aB / Ad
B → b
C → g

• Solution: the given grammar is left recursive, so first remove the left recursion from it.
• After eliminating left recursion, the new grammar is:

S → A
A → aBA'
A' → dA' / ε
B → b
C → g
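The transformation applied above, A → Aα | β becoming A → βA′ with A′ → αA′ | ε, can be sketched for single-letter grammar symbols (an illustrative Python helper, not part of the slides):

```python
# Immediate left-recursion removal for one non-terminal:
#   A -> Aα | β   becomes   A -> βA',  A' -> αA' | ε
# Symbols are single characters; '' stands for ε.
def remove_left_recursion(nt, productions):
    rec    = [p[1:] for p in productions if p and p[0] == nt]       # α parts
    nonrec = [p for p in productions if not (p and p[0] == nt)]     # β parts
    if not rec:
        return {nt: productions}        # nothing to do
    new = nt + "'"
    return {
        nt:  [beta + new for beta in nonrec],   # A  -> βA'
        new: [alpha + new for alpha in rec] + [''],  # A' -> αA' | ε
    }

print(remove_left_recursion('A', ['aB', 'Ad']))
# {'A': ["aBA'"], "A'": ["dA'", '']}
```

Applied to A → aB | Ad, this yields exactly the rules A → aBA′ and A′ → dA′ | ε shown above.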
Cont'd

First Function                    Follow Function
First(S) = First(A) = {a}         Follow(S) = {$}
First(A) = {a}                    Follow(A) = Follow(S) = {$}
First(A') = {d, ε}                Follow(A') = Follow(A) = {$}
First(B) = {b}                    Follow(B) = {First(A') - ε} U Follow(A) = {d, $}
First(C) = {g}                    Follow(C) = NA

Production      First       Follow
S → A           {a}         {$}
A → aBA'        {a}         {$}
A' → dA' / ε    {d, ε}      {$}
B → b           {b}         {d, $}
C → g           {g}         NA
Cont'd
• Step 2: Construct the parse table using the First and Follow functions:

        a           b        d           g        $
S       S → A
A       A → aBA'
A'                           A' → dA'             A' → ε
B                   B → b
C                                        C → g
Cont'd
• Step 3: Stack implementation using the parsing table, for the input string abd$:

Stack      Input     Production / Action
S$         abd$      S → A
A$         abd$      A → aBA'
aBA'$      abd$      Pop a
BA'$       bd$       B → b
bA'$       bd$       Pop b
A'$        d$        A' → dA'
dA'$       d$        Pop d
A'$        $         A' → ε
$          $         Accept; the input string is properly parsed.
Cont'd
• Step 4: Generate the parse tree using the stack implementation, following the top-down approach:

(Parse tree for the string abd: S → A; A has children a, B, A'; B → b; A' has children d, A'; the inner A' → ε.)
Cont'd
• Example 3: Show that the following grammar is LL (1):

S → AaAb | BbBa
A → ε
B → ε

• Solution: we will first construct the FIRST and FOLLOW sets for the given grammar:

        FIRST       FOLLOW
S       {a, b}      {$}
A       {ε}         {a, b}
B       {ε}         {a, b}

• Parsing table for the above grammar:

        a            b            $
S       S → AaAb     S → BbBa
A       A → ε        A → ε
B       B → ε        B → ε
Cont'd
• Now consider the string "ba" for parsing:

Stack      Input    Production / Action
S$         ba$      S → BbBa
BbBa$      ba$      B → ε
bBa$       ba$      Pop b
Ba$        a$       B → ε
a$         a$       Pop a
$          $        Accept

• This shows that the given grammar is LL (1).
(Parse tree for "ba": the root S has children B, b, B, a, with both occurrences of B deriving ε.)
Cont'd
• Example 4: Construct the LL (1) parsing table for the following grammar:

S → aB | aC | Sd | Se
B → bBc | f
C → g

• Solution: we will first construct the FIRST and FOLLOW sets for the given grammar:

        FIRST       FOLLOW
S       {a}         {d, e, $}
B       {b, f}      {c, d, e, $}
C       {g}         {d, e, $}

• Parsing table:

        a           b          c    d    f       g       $
S       S → aB
        S → aC
B                   B → bBc              B → f
C                                                C → g

• The above table shows multiple entries at M[S, a].
• This shows that the given grammar is not LL (1).
Bottom-Up Parser
– In the bottom-up parsing method, the input string is taken first and we try to reduce this string with the help of the grammar, so as to obtain the start symbol.
– The process of parsing halts successfully as soon as we reach the start symbol.
– The parse tree is constructed from bottom to top, that is, from the leaves to the root.
– In this process, the input symbols are placed at the leaf nodes after successful parsing.
– The bottom-up parse tree is created starting from the leaves; the leaf nodes together are reduced further to internal nodes, and these internal nodes are in turn reduced until the root is obtained.
Bottom-Up Parser Cont.…
– In this process, the parser basically tries to identify the RHS of a production rule and replace it by the corresponding LHS.
– This activity is called reduction.
– Thus, the crucial but prime task in bottom-up parsing is to find the productions that can be used for reduction.
– The bottom-up parse tree construction process indicates that the tracing of derivations is to be done in reverse order.
– Example 1: The input string is float id, id;
Bottom-Up Parser Cont.…
– Now, constructing the parse tree in a bottom-up manner proceeds as follows:
– Step 1: We start from the leaf nodes.
– Step 2: …
– Step 3: Reduce float to T (T → float).
– Step 4: Read the next string from the input.
– Step 5: Reduce id to L (L → id).
– Step 6: Read the next string from the input.
– Step 7: Read the next string from the input.
Bottom-Up Parser Cont.…
– Step 8: … gets reduced.
– Step 9: … gets reduced.
– Step 10: The sentential forms produced while constructing this parse tree are: …
– Step 11: Thus, looking at the sentential forms, we can say that the rightmost derivation is traced out in reverse order.
Figure: Bottom-up parsing
Bottom-Up Parser Cont.…
– Thus, the basic steps in bottom-up parsing are:
1. Reduction of the input string to the start symbol.
2. The sentential forms that are produced in the reduction process should trace out a rightmost derivation in reverse.
– As said earlier, the crucial task in bottom-up parsing is to find the substring that could be reduced by the appropriate non-terminal.
– Such a substring is called a handle.
– In other words, a handle is a substring that matches the right side of a production, and we can reduce such a string by the non-terminal on the left-hand side of the production.
– Such a reduction represents one step along the reverse of the rightmost derivation.
Bottom-Up Parser Cont.…
– Formally, we can define a handle as follows:
– A handle of a right sentential form γ is a production A → β and a position of γ where the string β may be found and replaced by A to produce the previous right sentential form in a rightmost derivation of γ.
– For example: consider the grammar
– Now consider the string id + id + id and its rightmost derivation.
– The bold strings in that derivation are called handles.
– Thus, bottom-up parsing is essentially a process of detecting handles and using them in reduction.
Shift Reduce Parser
– A shift reduce parser attempts to construct the parse tree from the leaves to the root.
– Thus, it works on the same principle as the bottom-up parser.
– A shift reduce parser requires the following data structures:
1. The input buffer, storing the input string.
2. A stack, for storing and accessing the LHS and RHS of rules.
– The initial configuration of the shift reduce parser is as shown below:

– The parser performs the following basic operations:
1. Shift: moving symbols from the input buffer onto the stack; this action is called shift.
2. Reduce: if the handle appears on the top of the stack, then it is reduced.
Shift Reduce Parser Cont.…
• That means the RHS of the rule is popped off and the LHS is pushed in.
• This action is called the reduce action.
3. Accept: if the stack contains only the start symbol and the input buffer is empty at the same time, then that action is called accept.
• When the accept state is obtained in the process of parsing, it means a successful parse is done.
4. Error: a situation in which the parser can neither shift nor reduce the symbols, and cannot even perform the accept action, is called an error.
– Example 1: Consider the grammar
• Perform shift-reduce parsing of the input string id1 - id2 * id3.
Shift Reduce Parser Cont.…
– Solution:
(Parse tree, constructed in a bottom-up manner: the root E has children E, -, E; the right E has children E, *, E; the leaves are the ids.)

– Here we have followed two rules:
1. If the incoming operator has higher priority than the in-stack operator, then perform a shift.
2. If the in-stack operator has the same or higher priority than the incoming operator, then perform a reduce.
Shift Reduce Parser Cont.…
– Example 2: Consider the grammar
• Parse the input string int id, id; using a shift reduce parser.
– Solution:
(Parse tree, constructed in a bottom-up manner: the root S has children T, L, ;; T derives int; L has children L, ,, id; the inner L derives id.)
Shift Reduce Parser Cont.…
– Example 3: Consider the grammar:

E → 2E2
E → 3E3
E → 4

• Parse the input string 32423 using a shift reduce parser.
– Solution:

Stack      Input Buffer    Parsing Action
$          32423$          Shift 3
$3         2423$           Shift 2
$32        423$            Shift 4
$324       23$             Reduce by E → 4
$32E       23$             Shift 2
$32E2      3$              Reduce by E → 2E2
$3E        3$              Shift 3
$3E3       $               Reduce by E → 3E3
$E         $               Accept

(Parse tree, constructed in a bottom-up manner: E → 3 E 3, the inner E → 2 E 2, and the innermost E → 4.)
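The trace for Example 3 can be reproduced with a minimal shift-reduce sketch (Python used for illustration; the parser reduces whenever the top of the stack matches a right-hand side, otherwise it shifts):

```python
# A minimal shift-reduce parser for the grammar E -> 2E2 | 3E3 | 4.
RULES = [('E', '2E2'), ('E', '3E3'), ('E', '4')]

def shift_reduce(tokens):
    stack, i, trace = [], 0, []
    while True:
        # Reduce: replace a handle on top of the stack by the LHS.
        for lhs, rhs in RULES:
            if ''.join(stack).endswith(rhs):
                del stack[len(stack) - len(rhs):]
                stack.append(lhs)
                trace.append(f'reduce by {lhs} -> {rhs}')
                break
        else:
            if i < len(tokens):          # Shift the next input symbol.
                stack.append(tokens[i])
                trace.append(f'shift {tokens[i]}')
                i += 1
            else:
                break                    # neither shift nor reduce possible
    return stack == ['E'], trace         # accept iff only E remains

ok, trace = shift_reduce('32423')
print(ok)   # True
```

Running it on 32423 produces the same shift and reduce sequence as the table above; for strings not in the language the stack fails to collapse to E.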
Operator Precedence Parsing
– A grammar G is called an operator precedence grammar if it meets the following two conditions:
1. There exists no production rule which contains ε (epsilon) on its right-hand side (RHS).
2. There exists no production rule which contains two non-terminals adjacent to each other on its right-hand side (RHS).
– A parser that reads and understands an operator precedence grammar is called an operator precedence parser.
– Example 1: the following is not an operator precedence grammar (EAE has two adjacent non-terminals):

E → EAE | (E) | -E | id
A → + | - | * | / | ^

– Example 2: the following is an operator precedence grammar:

E → E + E | E - E | E * E | E / E | E ^ E | (E) | -E | id
Operator Precedence Parsing Cont.…
– Operator precedence relations can be established between the terminals of the grammar.
– The non-terminals are ignored.
– Parsing action:
1. Add the $ symbol at both ends of the given input string.
2. Now scan the input from left to right until the first > is encountered.
3. Scan towards the left over all equal precedences until the leftmost < is encountered.
4. Everything between the leftmost < and the rightmost > is the handle.
5. $ on $ means parsing is successful.
Operator Precedence Parsing Cont.…
– There are three operator precedence relations:
1. a > b means terminal a has higher precedence than b.
2. a < b means terminal a has lower precedence than b.
3. a = b means terminals a and b have the same precedence.
– Precedence table (x marks an error entry):

      +    *    (    )    id   $
+     >    <    <    >    <    >
*     >    >    <    >    <    >
(     <    <    <    =    <    x
)     >    >    x    >    x    >
id    >    >    x    >    x    >
$     <    <    <    x    <    A

Rules: operands (id, a, b, c) have the highest precedence; $ has the lowest; + > + and * > * (left associativity); id x id is an error; $ A $ means accept.
Operator Precedence Parsing Cont.…
– Example 1: Consider the following grammar, construct the operator precedence parser, and then parse the string id+id*id:

E → EAE | id
A → + | *

– Solution:
• Step 1: Convert the given grammar to an operator precedence grammar:

E → E + E | E * E | id

• Step 2: Construct the operator precedence (relation) table; the terminal symbols are {id, +, *, $}:

      id   +    *    $
id         >    >    >
+     <    >    <    >
*     <    >    >    >
$     <    <    <    A
Operator Precedence Parsing Cont.…
• Step 3: Parse the given string id+id*id:

Stack      Relation   Input       Action
$          <          id+id*id$   Shift id
$id        >          +id*id$     Reduce by E → id
$E         <          +id*id$     Shift +
$E+        <          id*id$      Shift id
$E+id      >          *id$        Reduce by E → id
$E+E       <          *id$        Shift *
$E+E*      <          id$         Shift id
$E+E*id    >          $           Reduce by E → id
$E+E*E     >          $           Reduce by E → E * E
$E+E       >          $           Reduce by E → E + E
$E         A          $           Accept
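The parse in Step 3 can be driven by a small sketch of the operator-precedence algorithm (Python used for illustration; it compares the topmost *terminal* on the stack with the lookahead using the relation table from Step 2, and E marks a reduced non-terminal):

```python
# Operator-precedence driver for E -> E + E | E * E | id,
# using the relation table from Step 2 (rows: stack terminal, cols: lookahead).
PREC = {
    ('id', '+'): '>', ('id', '*'): '>', ('id', '$'): '>',
    ('+', 'id'): '<', ('+', '+'): '>', ('+', '*'): '<', ('+', '$'): '>',
    ('*', 'id'): '<', ('*', '+'): '>', ('*', '*'): '>', ('*', '$'): '>',
    ('$', 'id'): '<', ('$', '+'): '<', ('$', '*'): '<',
}

def op_precedence_parse(tokens):
    stack = ['$']
    tokens = list(tokens) + ['$']
    i = 0
    while True:
        top = next(s for s in reversed(stack) if s != 'E')  # topmost terminal
        tok = tokens[i]
        if top == '$' and tok == '$':
            return stack == ['$', 'E']       # accept configuration
        rel = PREC.get((top, tok))
        if rel in ('<', '='):
            stack.append(tok)                # shift
            i += 1
        elif rel == '>':                     # reduce the handle
            if stack[-1] == 'id':
                stack.pop()                  # E -> id
            elif len(stack) >= 3 and stack[-1] == 'E' and stack[-3] == 'E':
                del stack[-3:]               # E -> E + E  or  E -> E * E
            else:
                return False
            stack.append('E')
        else:
            return False                     # blank entry: error

print(op_precedence_parse(['id', '+', 'id', '*', 'id']))   # True
```

On id+id*id the driver performs exactly the shift/reduce sequence listed above, reducing E * E before E + E because * has higher precedence.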
LR Parser

Reading Assignment!!!
Thank You!!!

End of Part 1!!!
Thank You!!!
