Subject code: CO302
Submitted to:
Mr. Lavendra Gautam
Submitted by:
Pritam Bera (2K22/CO/347)
B. Tech CSE’ 2026
INDEX
EXPERIMENT - 1
AIM:
Program to convert NFA to DFA
THEORY:
A Non-Deterministic Finite Automaton (NFA) is a mathematical model consisting of a set of states, a
set of input symbols, a transition function, an initial state, and a set of accepting states. Unlike a
Deterministic Finite Automaton (DFA), an NFA allows transitions to multiple states from a single
state on the same input symbol.
The conversion of an NFA to a DFA involves creating a DFA that simulates the behavior of the given
NFA. This process typically involves the following steps:
Epsilon Closure (ε-Closure): In an NFA, ε-transitions allow moving from one state to another without
consuming any input. The ε-closure of a state is the set of states reachable from that state using ε-
transitions. Computing the ε-closure for each state is essential for subsequent steps.
Subset Construction: The Subset Construction algorithm is used to construct a DFA from the given
NFA. In this process, each state of the DFA corresponds to a set of states from the NFA. The initial
state of the DFA is the ε-closure of the initial state of the NFA, and transitions between states are
determined based on the transitions of the NFA.
State Equivalence: During the subset construction process, it's crucial to identify equivalent states in
the DFA to minimize the number of states and optimize the DFA's size. States are considered
equivalent if they represent the same set of states from the NFA and lead to the same set of states on
the same input symbols.
DFA Minimization (Optional): Once the DFA is constructed, it can be further optimized by
minimizing the number of states while preserving the language it accepts. Minimization
algorithms, such as Hopcroft's algorithm or Moore's algorithm, can be applied to achieve this
optimization.
CODE:
std::string NFA_FILE;
std::string buffer;
int zz = 0;
int last_index = 0;
std::ifstream fp;
int symbols;
buffer.clear();
state(closure_ar, buffer);
closure_table[i] = buffer;
std::cout << closure_table[i] << "\n";
}
}
if (temp != "-") {
for (char d : temp) {
int k = d - 'A';
temp2 = clsr_t[k];
check(arr, temp2);
}
}
}
temp.clear();
state(arr, temp);
TB = temp.empty() ? "-" : temp;
}
// Driver Code
int main() {
int states;
std::string T_buf;
dfa_states[last_index - 1].count = 1;
buffer.clear();
buffer = closure_table[0];
dfa_states[last_index++].states = buffer;
int Sm = 1, ind = 1;
int start_index = 1;
return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Understand the differences between Non-Deterministic Finite Automata (NFA) and
Deterministic Finite Automata (DFA).
2. Implement the ε-closure algorithm to compute the ε-closure of states in an NFA.
3. Apply the Subset Construction algorithm to convert an NFA to an equivalent DFA.
4. Identify equivalent states in the constructed DFA and understand the importance of state
equivalence in DFA optimization.
5. Optionally, apply DFA minimization algorithms to optimize the size of the DFA obtained from
the conversion process.
6. Analyze and evaluate the efficiency and effectiveness of the conversion process in terms of
time complexity and space complexity.
EXPERIMENT - 2
AIM:
Program to build a DFA that accepts strings which start and end with the same character (for
strings over the characters 'a' and 'b')
THEORY:
A Deterministic Finite Automaton (DFA) is a mathematical model consisting of a finite set of states,
a finite set of input symbols (alphabet), a transition function, an initial state, and a set of accepting
states. DFAs are used to recognize or accept strings that belong to a particular language.
To construct a DFA that accepts strings starting and ending with the same character, we need to
define the following:
1. States: The DFA will have states representing different scenarios of the input string.
2. Alphabet: The input alphabet consists of the characters 'a' and 'b'.
3. Transition Function: Define the transitions between states based on the input characters.
4. Initial State: The starting state of the DFA.
5. Accepting States: States where the DFA accepts the input string.
The construction of the DFA involves the following steps:
1. Define States: Typically, the DFA will have states representing different scenarios of the input
string, such as "start with 'a'", "start with 'b'", "end with 'a'", "end with 'b'", and "other scenarios".
2. Transition Function: Determine the transitions between states based on the input characters 'a' and
'b'. For example, if the current state represents "start with 'a'", and the next input character is 'a',
transition to a state representing "start with 'a'". Similarly, define transitions for other scenarios.
3. Initial State: Define the initial state of the DFA, usually representing the beginning of the string.
4. Accepting States: Specify the accepting states of the DFA, representing strings that start and end
with the same character.
CODE:
#include <bits/stdc++.h>
using namespace std;
// States of DFA
void q1(string, int);
void q2(string, int);
void q3(string, int);
void q4(string, int);
// State transitions
// 'a' takes to q1, and
// 'b' takes to q2
if (s[i] == 'a')
q1(s, i + 1);
else
q2(s, i + 1);
}
// State transitions
// 'a' takes to q1, and
// 'b' takes to q2
if (s[i] == 'a')
q1(s, i + 1);
else
q2(s, i + 1);
}
// State transitions
// 'a' takes to q4, and
// 'b' takes to q3
if (s[i] == 'a')
q4(s, i + 1);
else
q3(s, i + 1);
}
// State transitions
// 'a' takes to q4, and
// 'b' takes to q3
if (s[i] == 'a')
q4(s, i + 1);
else
q3(s, i + 1);
}
// State transitions
// 'a' takes to q1, and
// 'b' takes to q3
if (s[i] == 'a')
q1(s, i + 1);
else
q3(s, i + 1);
}
// Driver Code
int main()
{
string s = "abbaabb";
OUTPUT:
Created by – rahul(355)
LEARNING OUTCOME:
1. Understand the concept of Deterministic Finite Automata (DFA) and its components.
2. Design a DFA to recognize strings that start and end with the same character.
3. Implement the transition function for the DFA based on the input alphabet.
4. Define the initial state and accepting states of the DFA.
5. Analyze the behavior of the DFA and its ability to recognize valid strings from the given
language.
6. Evaluate the correctness and efficiency of the DFA construction process.
EXPERIMENT - 3
AIM:
Program to detect tokens in a Program
(e.g., keywords, operators, identifiers, etc.)
THEORY:
Tokenization is the process of breaking a sequence of characters into smaller units called tokens. In
programming languages, tokens represent the basic building blocks of the language, such as keywords,
operators, identifiers, literals, and punctuation symbols.
CODE:
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
if (len == 0)
    return (false);
for (i = 0; i < len; i++) {
    if (str[i] != '0' && str[i] != '1' && str[i] != '2'
        && str[i] != '3' && str[i] != '4' && str[i] != '5'
        && str[i] != '6' && str[i] != '7' && str[i] != '8'
        && str[i] != '9' || (str[i] == '-' && i > 0))
        return (false);
}
return (true);
}
if (len == 0)
    return (false);
for (i = 0; i < len; i++) {
    if (str[i] != '0' && str[i] != '1' && str[i] != '2'
        && str[i] != '3' && str[i] != '4' && str[i] != '5'
        && str[i] != '6' && str[i] != '7' && str[i] != '8'
        && str[i] != '9' && str[i] != '.' ||
        (str[i] == '-' && i > 0))
        return (false);
    if (str[i] == '.')
        hasDecimal = true;
}
return (hasDecimal);
}
    right++;
    left = right;
} else if (isDelimiter(str[right]) == true && left != right
           || (right == len && left != right)) {
    char* subStr = subString(str, left, right - 1);
    if (isKeyword(subStr) == true)
        printf("'%s' IS A KEYWORD\n", subStr);
// DRIVER FUNCTION
int main()
{
// maximum length of string is 100 here
char str[100] = "int a = b + 1c; ";
return (0);
}
OUTPUT:
LEARNING OUTCOME:
1. Understand the concept of tokenization and its importance in programming language processing.
2. Identify and define different types of tokens, including keywords, operators, identifiers, literals,
and punctuation symbols.
3. Implement algorithms to tokenize input source code by scanning characters and recognizing
token patterns.
4. Categorize tokens into appropriate types based on predefined rules and patterns.
5. Handle special cases and edge cases in token detection, such as handling escape characters in
string literals, nested expressions, etc.
EXPERIMENT - 4
AIM:
Write a program to implement lexical analyser
THEORY:
A lexical analyzer, also known as a lexer or scanner, is the first phase of a compiler that breaks the input
source code into a sequence of tokens. These tokens represent the basic building blocks of the programming
language and serve as input to the subsequent phases of the compiler.
The process of lexical analysis involves scanning the input source code character by character, recognizing and
categorizing sequences of characters into different token types based on predefined rules and patterns.
CODE:
// C++ program to illustrate the implementation of lexical
// analyser
#include<bits/stdc++.h>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>
    right++;
    left = right;
}
else if (isDelimiter(input[right]) && left != right
         || (right == len && left != right)) {
    std::string subStr
        = getSubstring(input, left, right - 1);
    if (isKeyword(subStr))
        std::cout << "Token: Keyword, Value: " << subStr << std::endl;
    else if (isInteger(subStr))
        std::cout << "Token: Integer, Value: " << subStr << std::endl;
    else if (isValidIdentifier(subStr)
             && !isDelimiter(input[right - 1]))
        std::cout << "Token: Identifier, Value: " << subStr << std::endl;
    else if (!isValidIdentifier(subStr)
             && !isDelimiter(input[right - 1]))
        std::cout << "Token: Unidentified, Value: " << subStr << std::endl;
    left = right;
}
}
}
// main function
int main()
{
// Input 01
std::string lex_input = "int a = b * d + c";
std::cout << "For Expression \"" << lex_input << "\":" << std::endl;
lexicalAnalyzer(lex_input);
std::cout << " \n";
return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Understand the role and importance of lexical analysis in the compilation process.
2. Define token types and their corresponding patterns using regular expressions or finite automata.
3. Implement a lexical analyzer capable of tokenizing input source code into keywords, identifiers,
literals, operators, and punctuation symbols.
4. Handle special cases and edge cases in token recognition, such as handling escape characters in
string literals, nested expressions, etc.
5. Test and debug the lexical analyzer to ensure accurate tokenization of input source code.
EXPERIMENT –5
AIM:
Write a program to implement the recursive descent parser.
THEORY:
Recursive descent parsing is a top-down parsing technique where a set of recursive procedures is
used to process the input. Each procedure typically corresponds to a non-terminal symbol in the
grammar. The parser begins with the start symbol and recursively applies the production rules to
match the input against the grammar.
ALGORITHM:
1. Define grammar rules for the language to be parsed.
2. Implement recursive procedures for each non-terminal symbol in the grammar.
3. Start parsing from the start symbol using the corresponding recursive procedure.
4. Apply the production rules recursively to match the input against the grammar.
5. Handle syntax errors appropriately.
CODE:
#include <stdio.h>
#include <string.h>

#define SUCCESS 1
#define FAILED 0

int E(), Edash(), T(), Tdash(), F();

char string[64];
char *cursor;

int main() {
    puts("Enter the string");
    scanf("%s", string);
    cursor = string;
    puts("");
    puts("Input           Action");
    puts("--------------------------------");
    if (E() && *cursor == '\0') {
        puts("--------------------------------");
        puts("String is successfully parsed");
    } else {
        puts("--------------------------------");
        puts("Error in parsing String");
    }
    return 0;
}

int E() {
    printf("%-16s E -> T E'\n", cursor);
    if (T()) {
        if (Edash()) return SUCCESS;
        else return FAILED;
    }
    else return FAILED;
}

int Edash() {
    if (*cursor == '+') {
        printf("%-16s E' -> + T E'\n", cursor);
        cursor++;
        if (T()) {
            if (Edash()) return SUCCESS;
            else return FAILED;
        }
        else return FAILED;
    }
    else {
        printf("%-16s E' -> $\n", cursor);
        return SUCCESS;
    }
}

int T() {
    printf("%-16s T -> F T'\n", cursor);
    if (F()) {
        if (Tdash()) return SUCCESS;
        else return FAILED;
    }
    else return FAILED;
}

int Tdash() {
    if (*cursor == '*') {
        printf("%-16s T' -> * F T'\n", cursor);
        cursor++;
        if (F()) {
            if (Tdash()) return SUCCESS;
            else return FAILED;
        }
        else return FAILED;
    }
    else {
        printf("%-16s T' -> $\n", cursor);
        return SUCCESS;
    }
}

int F() {
    if (*cursor == '(') {
        printf("%-16s F -> ( E )\n", cursor);
        cursor++;
        if (E()) {
            if (*cursor == ')') {
                cursor++;
                return SUCCESS;
            }
            else return FAILED;
        }
        else return FAILED;
    }
    else if (*cursor == 'i') {
        printf("%-16s F -> i\n", cursor);
        cursor++;
        return SUCCESS;
    }
    else return FAILED;
}
OUTPUT:
LEARNING OUTCOME:
1. Understand the principles of recursive descent parsing.
2. Gain hands-on experience in implementing a recursive descent parser.
3. Learn how to define grammar rules and corresponding recursive procedures.
4. Develop skills in handling syntax errors during parsing.
EXPERIMENT - 6
AIM:
Program to eliminate left factoring in the given grammar
THEORY:
Left factoring is a process used to remove common prefixes from the productions of a grammar.
When a grammar contains productions with common prefixes, it can lead to parsing ambiguities and
increase the complexity of parsing algorithms. Left factoring simplifies the grammar by introducing
new non-terminals to eliminate these common prefixes.
The general steps for eliminating left factoring from a grammar are as follows:
1. For each non-terminal, find the longest prefix α common to two or more of its alternatives.
2. If α is non-empty, replace A -> αβ1 | αβ2 | ... | αβn | γ with A -> αA' | γ and A' -> β1 | β2 | ... | βn, where A' is a new non-terminal.
3. Repeat until no two alternatives of any non-terminal share a common prefix.
CODE:
#include <iostream>
#include <vector>
#include <string>
if (foundCommon) {
commonPrefixes.push_back(prod.substr(0, 1)); // Take single character as prefix
} else {
nonCommonSuffixes.push_back(prod.substr(1)); // Exclude the common prefix as suffix
}
}
newGrammar.push_back(newRule);
} else {
// If no common prefixes found, keep the rule unchanged
newGrammar.push_back(rule);
}
}
int main() {
// Example grammar
vector<ProductionRule> grammar = {
{'S', {"abc", "abcd", "abef"}},
{'A', {"abc", "abcd", "abef"}}
};
return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Understand the concept of left factoring and its importance in grammar simplification.
2. Identify common prefixes in productions and recognize the need for left factoring.
3. Implement an algorithm to eliminate left factoring from a given grammar.
4. Modify productions and introduce new non-terminals to achieve left factoring.
5. Verify the correctness of the left factoring transformation and ensure that the resulting grammar
retains its original language.
EXPERIMENT - 7
AIM:
Write a program to convert a left-recursive grammar to a right-recursive grammar
THEORY:
Left recursion in a grammar occurs when a non-terminal directly or indirectly produces itself as the
first symbol in a production. This can lead to parsing challenges, including infinite loops and
ambiguity. To mitigate these issues, left recursion must be eliminated. This process involves creating
new non-terminals and rewriting production rules to remove the left recursion. By replacing left-
recursive productions with non-left-recursive alternatives, the grammar becomes suitable for parsing
algorithms, ensuring reliable and efficient parsing processes without encountering infinite loops or
ambiguity.
ALGORITHM:
1. Identify left-recursive productions in the grammar.
2. For each left-recursive non-terminal:
- Create a new non-terminal symbol.
- Rewrite the productions to eliminate left recursion.
3. Update the grammar with the new non-terminals and productions.
4. Repeat steps 2 and 3 until all left recursion is eliminated.
CODE:
#include <iostream>
#include <string>
#include <vector>
#include <map>
using namespace std;
if (!alpha.empty()) {
char new_A = A + '\'';
newGrammar[new_A] = alpha;
newGrammar[A] = (beta.empty()) ? vector<string>{"ε"} : beta;
for (string& prod : newGrammar[A]) {
prod += new_A;
}
} else {
newGrammar[A] = grammar.at(A);
}
}
return newGrammar;
}
int main() {
map<char, vector<string>> grammar = {
{'E', {"E+T", "T"}},
{'T', {"T*F", "F"}},
{'F', {"(E)", "id"}}
};
return 0;
}
OUTPUT :
LEARNING OUTCOME:
1. Understand the concept of left recursion in grammars.
2. Learn the algorithm to eliminate left recursion.
3. Gain hands-on experience in implementing a program to eliminate left recursion in a
grammar.
4. Learn how to update grammars to remove left recursion while preserving their original
structure.
EXPERIMENT – 8
AIM:
Program to find first and follow of the given grammar.
THEORY:
FIRST Set:
The FIRST set of a symbol (either a terminal or a non-terminal) in a grammar is the set of terminals
that begin the strings derivable from that symbol. In other words, it represents all the possible
terminals that can appear as the first symbol of a string generated from the given symbol.
FOLLOW Set:
The FOLLOW set of a non-terminal symbol in a grammar is the set of terminals that can immediately
follow the occurrences of that non-terminal in any derivation of a string.
These sets are crucial in constructing predictive parsers, which are used to parse strings based on
context-free grammars. They help in determining the appropriate production rules to use during
parsing.
CODE:
// C program to calculate the First and
// Follow sets of a given grammar
#include <ctype.h>
#include <stdio.h>
#include <string.h>
int count, n = 0;
int kay;
char done[count];
int ptr = -1;
if (xxx == 1)
continue;
// Function call
findfirst(c, 0, 0);
ptr += 1;
if (first[i] == calc_first[point1][lark]) {
chk = 1;
break;
}
}
if (chk == 0) {
printf("%c, ", first[i]);
calc_first[point1][point2++] = first[i];
}
}
printf("}\n");
jm = n;
point1++;
}
printf("\n");
printf("-----------------------------------------------"
"\n\n");
char donee[count];
ptr = -1;
// Checking if Follow of ck
// has already been calculated
for (kay = 0; kay <= ptr; kay++)
if (ck == donee[kay])
xxx = 1;
if (xxx == 1)
continue;
land += 1;
// Function call
follow(ck);
ptr += 1;
void follow(char c)
{
int i, j;
EXPERIMENT - 9
AIM:
Write a program to construct LL(1) parsing table
THEORY: A top-down parser builds the parse tree from the top down, starting with the start
non-terminal. An LL(1) parser is a top-down parser: the first L means the input is scanned
from left to right, the second L means the parser produces a leftmost derivation, and the 1 is
the number of lookahead symbols examined when making each parsing decision.
ALGORITHM:
1. Compute the FIRST set of every right-hand side and the FOLLOW set of every non-terminal.
2. For each production A -> α, add A -> α to the table entry M[A, a] for every terminal a in FIRST(α).
3. If ε is in FIRST(α), add A -> α to M[A, b] for every b in FOLLOW(A), including $.
4. Any table entry left empty is an error entry.
CODE:
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

string first(map<char, vector<string>> m, char ch) {
    if (ch == '^' || !(ch >= 'A' && ch <= 'Z'))
        return string(1, ch);
    string ans = "";
    for (int i = 0; i < m[ch].size(); i++) {
        string s = m[ch][i];
        bool checknext = true;
        for (int j = 0; j < s.length() && checknext; j++) {
            checknext = false;
            if (s[j] == '^' || !(s[j] >= 'A' && s[j] <= 'Z')) {
                if (ans.find(s[j]) == string::npos)
                    ans.push_back(s[j]);
            } else {
                string temp = first(m, s[j]);
                for (int k = 0; k < temp.length(); k++) {
                    if (temp[k] == '^')
                        checknext = true;
                    else if (ans.find(temp[k]) == string::npos)
                        ans.push_back(temp[k]);
                }
                if (checknext && j == s.length() - 1)
                    ans.push_back('^');
            }
        }
    }
    return ans;
}
string firstofstring(map<char, vector<string>> m, string s) {
    string ans = "";
    bool checknext = true;
    for (int j = 0; j < s.length() && checknext; j++) {
        checknext = false;
        if (s[j] == '^' || !(s[j] >= 'A' && s[j] <= 'Z')) {
            if (ans.find(s[j]) == string::npos)
                ans.push_back(s[j]);
        } else {
            string temp = first(m, s[j]);
            for (int k = 0; k < temp.length(); k++) {
                if (temp[k] == '^')
                    checknext = true;
                else if (ans.find(temp[k]) == string::npos)
                    ans.push_back(temp[k]);
            }
            if (checknext && j == s.length() - 1)
                ans.push_back('^');
        }
    }
    if (ans == "")
        return "^";
    return ans;
}
string follow(map<char, vector<string>> prod, char start, char ch) {
    string ans = "";
    if (start == ch)
        ans.push_back('$');
    for (map<char, vector<string>>::iterator i = prod.begin(); i != prod.end(); i++) {
        for (int j = 0; j < i->second.size(); j++) {
            if (i->second[j].find(ch) == string::npos)
                continue;
            string temp = "";
            bool parent = false;
            for (int a = 0; a < i->second[j].length(); a++) {
                parent = false;
                if (i->second[j][a] == ch) {
                    if (a == i->second[j].length() - 1) {
                        parent = true;
                    } else {
                        temp = firstofstring(prod,
                            i->second[j].substr(a + 1, i->second[j].length() - a - 1));
                        for (int k = 0; k < temp.length(); k++) {
                            if (temp[k] == '^') {
                                parent = true;
                                continue;
                            }
                            if (ans.find(temp[k]) == string::npos)
                                ans.push_back(temp[k]);
                        }
                    }
                    if (parent) {
                        // to tackle the case when the parent is the same as 'ch'
                        if (ch == i->first)
                            continue;
                        temp = follow(prod, start, i->first);
                        for (int k = 0; k < temp.length(); k++) {
                            if (ans.find(temp[k]) == string::npos)
                                ans.push_back(temp[k]);
                        }
                    }
                }
            }
        }
    }
    return ans;
}
int main() {
    map<char, vector<string>> prod;
    int n;
    int t;
    cout << "Enter number of terminals: ";
    cin >> t;
    map<char, int> terminals;
    for (int i = 0; i < t; i++) {
        char ch;
        cin >> ch;
        terminals[ch] = i;
    }
    terminals['$'] = t;
    cout << "Enter number of productions: ";
    cin >> n;
    cout << "Enter productions:" << endl;
    char start;
    for (int i = 0; i < n; i++) {
        char ch;
        string s;
        cin >> ch;
        if (i == 0)
            start = ch;
        cout << "->";
        cin >> s;
        prod[ch].push_back(s);
    }
    cout << "Given grammar is:" << endl << endl;
    for (map<char, vector<string>>::iterator i = prod.begin(); i != prod.end(); i++) {
        cout << i->first << " ->";
        for (int j = 0; j < i->second.size(); j++) {
            cout << i->second[j];
            if (j != i->second.size() - 1)
                cout << " | ";
        }
        cout << endl;
    }
    if (hasnull) {
        string temp = follow(prod, start, i->first);
        for (int k = 0; k < temp.length(); k++) {
            string ss = "";
            ss += i->first;
            ss += "->^";
            table[i->first - 'A'][terminals[temp[k]]] = ss;
        }
    }
    cout << endl << endl;
    cout << "Non-terminals \\ Terminals |";
    vector<char> vv(t + 1);
    for (map<char, int>::iterator i = terminals.begin(); i != terminals.end(); i++)
        vv[i->second] = i->first;
    for (int i = 0; i < vv.size(); i++)
        cout << " " << vv[i] << " |";
    cout << endl;
    int ii = 0;
    for (map<char, vector<string>>::iterator i = prod.begin();
         i != prod.end() && ii < table.size(); i++, ii++) {
        cout << endl;
    }
}
EXPERIMENT-10
AIM: Write a program to implement a non-recursive predictive parser.
THEORY: Non-recursive predictive parsing uses a parsing table that tells the parser which
production rule to select, given the non-terminal being expanded and the next terminal symbol
in the input.
The parsing table has a row for each non-terminal and a column for each terminal symbol,
including $, the end marker for the input string. Each entry M[A, a] of the table is either a
production rule or an error.
It uses a stack containing a sequence of grammar symbols with the $ symbol placed at the bottom,
indicating the bottom of the stack. Initially, the start symbol resides on top. The stack is used to keep
track of all the non-terminals for which no prediction has been made yet.
The parser also uses an input buffer and an output stream. The string to be parsed is stored in the
input buffer. The end of the buffer uses a $ symbol to indicate the end of the input string
CODE:
#include <bits/stdc++.h>
using namespace std;

map<char, map<char, int>> M = {
    {'E', {{'a', 0}, {'(', 0}}},
    {'D', {{'+', 1}, {')', 2}, {'$', 2}}},
    {'T', {{'a', 3}, {'(', 3}}},
    {'S', {{'*', 4}, {'+', 5}, {')', 5}, {'$', 5}}},
    {'F', {{'(', 6}, {'a', 7}}},
};

map<int, string> Grammar = {
    {0, "E=TD"},
    {1, "D=+TD"},
    {2, "D=$"},
    {3, "T=FS"},
    {4, "S=*FS"},
    {5, "S=$"},
    {6, "F=(E)"},
    {7, "F=a"}
};

void printStack(vector<char> s)
{
    cout << endl;
    for (int i = 0; i < s.size(); i++)
        cout << s[i] << ' ';
}
void Parse(string s)
{
    vector<char> stk;
    stk.push_back('#');
    stk.push_back('E');
    char peak = stk.back();
    int i = 0;
    printStack(stk);
    while (stk.size() > 0)
    {
        if (stk.size() <= 3)
            cout << "\t\t\tinput: " << s.substr(i);
        else
            cout << "\t\tinput: " << s.substr(i);
        if (peak == s[i])
        {
            if (stk.size() <= 3)
                cout << "\t\t\t\tMatched " << s[i];
            else
                cout << "\t\t\tMatched " << s[i];
            stk.pop_back();
            i++;
        }
        else if (islower(peak))
        {
            cout << "Error in parsing, expected: " << peak << " got: " << s[i] << endl;
            return;
        }
        else if (M[peak].find(s[i]) == M[peak].end())
        {
            cout << "Error in parsing, invalid expression" << endl;
            return;
        }
        else
        {
            char pp = peak;
            stk.pop_back();
            string prod = Grammar[M[pp][s[i]]];
            int z = prod.length() - 1;
            if (stk.size() <= 3)
                cout << "\t\t\tAction: " << prod;
            else
                cout << "\t\tAction: " << prod;
            // Push the right-hand side in reverse; '=' ends the push,
            // and '$' (ε) pushes nothing.
            while (prod[z] != '=' && prod[z] != '$')
            {
                stk.push_back(prod[z]);
                z--;
            }
        }
        peak = stk.back();
        printStack(stk);
        if (peak == '#' && s[i] == '$')
        {
            cout << "\nParsed successfully" << endl;
            return;
        }
    }
}
int main() {
    cout << "Enter some expression: ";
    string expr;
    cin >> expr;
    if (expr.back() != '$')
        expr += "$";
    Parse(expr);
}
EXPERIMENT-11
AIM: Write a program to implement error handling.
THEORY: Error handling in parsing involves detecting and recovering from syntax errors
encountered during the parsing process. When a parser encounters an error, it tries to identify the
point of failure and take appropriate action to continue parsing or report the error.
Error handlers can employ strategies like panic mode recovery, where the parser skips input until it
finds a synchronization point (e.g., a specific terminal symbol), or error productions, where the
parser uses special rules to synchronize its state with the input.
Effective error handling enhances parser robustness and helps provide meaningful feedback to users
about syntax errors in their input.
CODE:
#include <vector>
#include <string>
#include <sstream>
#include <unordered_set>
#include <iostream>
using namespace std;

class ErrorHandler {
public:
    void handleError(const string& message, int lineNumber) {
        stringstream errorStream;
        errorStream << "Error on line " << lineNumber << ": " << message << endl;
        errors_.push_back(errorStream.str());
    }
    bool hasErrors() const {
        return !errors_.empty();
    }
    vector<string> getErrors() const {
        return errors_;
    }
private:
    vector<string> errors_;
};

unordered_set<string> declaredVariables;
int main() {
    using namespace std;
    ErrorHandler errorHandler;
    stringstream codeStream;
    string line;
    int lineNumber = 1;
    while (getline(cin, line)) {
        codeStream << line << '\n';
        lineNumber++;
    }
    if (errorHandler.hasErrors()) {
        cout << "Compilation errors:" << endl;
        for (const auto& error : errorHandler.getErrors()) {
            cout << error;
        }
    } else {
        cout << "Compilation successful!" << endl;
    }
    return 0;
}
OUTPUT:
EXPERIMENT-12
AIM: Write a program to implement a one-pass compiler.
THEORY: A one-pass compiler is a compiler that traverses the source code exactly once, converting
it into executable code without the need for multiple passes or intermediate representations.
It typically performs lexical analysis, syntax analysis, semantic analysis, code generation, and
optimization in a single pass. One-pass compilers are memory-efficient and suitable for small and
medium-sized programs, but may lack certain optimizations and advanced features compared to
multi-pass compilers.
They are commonly used in embedded systems, scripting languages, and environments where
simplicity and speed are prioritized over advanced optimizations.
CODE:
#include <iostream>
#include <string>
#include <unordered_map>
#include <sstream>
#include <vector>
using namespace std;

// Enum for token types
enum class TokenType
{
    KEYWORD,
    IDENTIFIER,
    OPERATOR,
    LITERAL,
    DELIMITER,
    COMMENT
};

// Token structure
struct Token
{
    TokenType type;
    string value;
};

class OnePassCompiler
{
public:
    // Constructor
    OnePassCompiler() {}

private:
    // Process each line of source code
    void processLine(const string& line)
    {
        stringstream ss(line);
        string token;
        // ... split the line into tokens and call processToken on each ...
    }

    // Process token
    void processToken(const Token& token)
    {
        // Implement actions based on token type.
        // For simplicity, print token value and type
        cout << "Token: " << token.value << " Type: "
             << static_cast<int>(token.type) << endl;
    }
};

int main()
{
    // Instantiate the one-pass compiler
    OnePassCompiler compiler;
    return 0;
}
OUTPUT: