A
Seminar Report
On
Operating System
for
Diploma in Computer Engineering
Submitted by
Master Akshay M. Lunawat
Guided by
PROF. M. B. Dahiwal
DEPARTMENT OF COMPUTER ENGINEERING
GOVERNMENT POLYTECHNIC YAVATMAL
YEAR 2017-2018
Contents
1. Introduction
2. Functions
3. Evolution & Generations:
a) First Generation (1945-1955)
b) Second Generation (1955-1965)
c) Third Generation (1965 - 1980)
d) Fourth Generation (1980 - 1990)
4. Conclusion
Introduction
Before studying the operating system itself, let us look at the reason
behind making operating systems. In the 1940s, the earliest
electronic digital systems had no operating systems. Computers of this
time were so primitive compared to those of today that programs
were often entered into the computer one bit at a time on rows of
mechanical switches. Users soon realized that they could cut down the
amount of time wasted between jobs if the job-to-job transition
could be automated. The first major such system, considered by
many to be the first operating system, was designed by the General
Motors Research Laboratories for their IBM 701 mainframe, beginning
in early 1956. Its success helped establish batch computing: the
grouping of jobs into a single deck of cards, separated by control
cards that told the computer the specifications of each job.
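The batch idea described above can be illustrated with a small sketch in Python. This is purely illustrative: real batch monitors read punched cards, and the `$JOB`/`$END` control-card syntax used here is invented for the example.

```python
# Toy batch monitor: jobs are grouped into one "deck" of card images,
# separated by control cards (lines starting with "$") that describe
# each job. The $JOB / $END syntax is invented for illustration only.

def run_deck(deck):
    """Process a deck of cards job by job, with no human in between."""
    results = []
    job_name, job_cards = None, []
    for card in deck:
        if card.startswith("$JOB"):
            job_name, job_cards = card.split()[1], []
        elif card.startswith("$END"):
            # "Run" the job: here we simply count its program cards.
            results.append((job_name, len(job_cards)))
            job_name = None
        elif job_name is not None:
            job_cards.append(card)   # a card belonging to the current job
    return results

deck = [
    "$JOB payroll", "LOAD A", "ADD B", "$END",
    "$JOB invoices", "PRINT X", "$END",
]
print(run_deck(deck))   # [('payroll', 2), ('invoices', 1)]
```

The point of the control cards is exactly what the text describes: they let the machine move from one job to the next without any operator intervention in between.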
Functions
An operating system performs various functions. Most of these
functions are carried out at the kernel level, and the rest at the user level.
The major functions of an operating system are:
1. Allocating the resources needed to execute programs, by
identifying the programs that are running, their memory needs, the
peripheral devices required, and the data-protection requirements.
2. Providing facilities for data compression, sorting, merging,
cataloguing and maintenance of libraries, through the utility
programs available.
3. Scheduling work according to certain criteria, for
efficient use of the central processing unit.
4. Assisting the execution of programs through the computer-user
communication system, at both the hardware and the software level.
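Function 1 above, allocating resources by checking each program's declared needs, can be sketched in a few lines of Python. All names and quantities below are invented for illustration; a real kernel tracks these resources in far more detail.

```python
# Toy resource allocator: before a program runs, the OS checks its
# declared needs (memory, devices) against what is still free.
# The program names, sizes, and device names are invented examples.

free_memory_kb = 64
free_devices = {"printer", "tape"}

def try_allocate(program, needed_kb, needed_devices):
    """Grant the request only if every needed resource is available."""
    global free_memory_kb
    if needed_kb > free_memory_kb or not needed_devices <= free_devices:
        return False                      # denied: not enough resources
    free_memory_kb -= needed_kb           # grant the memory
    free_devices.difference_update(needed_devices)  # grant the devices
    return True

print(try_allocate("editor", 16, {"printer"}))   # True: 16 KB and printer free
print(try_allocate("report", 60, set()))         # False: only 48 KB remain
```

The all-or-nothing check is the essential idea: a job is admitted only when everything it identified as a requirement can actually be granted.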
Evolution & Generations
First Generation (1945-1955)
After Babbage's unsuccessful efforts, little progress was made in
constructing digital computers until World War II. Around the mid-
1940s, Howard Aiken at Harvard, John von Neumann at the Institute
for Advanced Study in Princeton, J. Presper Eckert and John
Mauchly at the University of Pennsylvania, and Konrad Zuse in
Germany, among others, all succeeded in building calculating engines.
The first ones used mechanical relays but were very slow, with cycle
times measured in seconds. Relays were later replaced by vacuum
tubes. These machines were enormous, filling up entire rooms with
tens of thousands of vacuum tubes, but they were still millions of times
slower than even the cheapest personal computers available today.
By the early 1950s, the routine had improved somewhat with the
introduction of punched cards. It was now possible to write programs
on cards and read them in instead of using plugboards; otherwise, the
procedure was the same.
Second Generation (1955-1965)
The introduction of the transistor in the mid-1950s changed the
picture radically. Computers became reliable enough that they could
be manufactured and sold to paying customers with the expectation
that they would continue to function long enough to get some useful
work done. For the first time, there was a clear separation between
designers, builders, operators, programmers, and maintenance
personnel.
These machines, now called mainframes, were locked away in
specially air conditioned computer rooms, with staffs of professional
operators to run them. Only big corporations or major government
agencies or universities could afford the multimillion dollar price tag.
To run a job (i.e., a program or set of programs), a programmer
would first write the program on paper (in FORTRAN or assembler),
then punch it on cards. He would then bring the card deck down to
the input room and hand it to one of the operators and go drink
coffee until the output was ready.
When the computer finished whatever job it was currently running,
an operator would go over to the printer and tear off the output and
carry it over to the output room, so that the programmer could
collect it. This was fast compared to the first generation.
Third Generation (1965 - 1980)
Third-generation operating systems were well suited for big scientific
calculations and massive commercial data processing runs, but they
were still basically batch systems. Many programmers pined for the
first-generation days when they had the machine all to themselves
for a few hours, so they could debug their programs quickly. With
third-generation systems, the time between submitting a job and
getting back the output was often several hours; a single
misplaced comma could cause a compilation to fail, and the
programmer to waste half a day.
This desire for quick response time paved the way for timesharing, a
variant of multiprogramming, in which each user has an online
terminal. In a timesharing system, if 20 users are logged in and 17 of
them are thinking or talking or drinking coffee, the CPU can be
allocated in turn to the three jobs that want service. Since people
debugging programs usually issue short commands rather than long
ones, the computer can provide fast, interactive service to a number
of users and perhaps also work on big batch jobs in the background
when the CPU is otherwise idle. The first serious timesharing system,
CTSS (Compatible Time Sharing System), was developed at M.I.T. on a
specially modified 7094. However, timesharing did not really become
popular until the necessary protection hardware became widespread
during the third generation.
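The scheme described above, handing the CPU in turn only to the jobs that currently want service, can be sketched as a simple round-robin loop in Python. The job names and work amounts are invented for the example.

```python
# Round-robin timesharing sketch: the CPU is given, one time slice at a
# time, only to jobs that want service -- as in the 20-users/3-active
# example above. Job names and time slices are invented for illustration.

from collections import deque

def timeshare(jobs, quantum=1):
    """jobs: dict of name -> remaining work units. Returns the run order."""
    ready = deque(jobs)          # jobs wanting service, in a circular queue
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)       # give this job one quantum of CPU time
        jobs[name] -= quantum
        if jobs[name] > 0:
            ready.append(name)   # not finished: back to the end of the queue
    return order

# Only the 3 jobs that want service share the CPU; the other 17 users
# are "thinking or drinking coffee" and consume no CPU time at all.
print(timeshare({"job1": 2, "job2": 1, "job3": 3}))
# ['job1', 'job2', 'job3', 'job1', 'job3', 'job3']
```

Because debugging commands are short, each job usually finishes within a few quanta, which is why every logged-in user still perceives fast, interactive service.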
Fourth Generation (1980 - 1990)
With the development of LSI (Large Scale Integration) circuits, chips
containing thousands of transistors on a square centimetre of silicon,
the age of the personal computer dawned. In terms of architecture,
personal computers (initially called microcomputers) were not all that
different from minicomputers of the PDP-11 class, but in terms of price
they certainly were different. Where the minicomputer made it
possible for a department in a company or university to have its own
computer, the microprocessor chip made it possible for a single
individual to have his or her own personal computer.
Another Microsoft operating system of this era was Windows NT (NT
stands for New Technology), which was compatible with Windows 95 at a
certain level, but internally a complete rewrite from scratch. It was a
full 32-bit system.
The lead designer for Windows NT was David Cutler, who was also one
of the designers of the VAX VMS operating system, so some ideas from
VMS are present in NT. Microsoft expected that the first version of NT
would kill off MS-DOS and all other versions of Windows since it was a
vastly superior system, but it fizzled. Only with Windows NT 4.0 did it
finally catch on in a big way, especially on corporate networks. Version
5 of Windows NT was renamed Windows 2000 in early 1999. That did
not quite work out either, so Microsoft came out with yet another
version of Windows 98 called Windows Me (Millennium edition).
Conclusion
From the seminar and the information gathered, we observed that the
invention of the operating system was one of the biggest inventions in
the field of computer engineering. It has been developed and has
evolved since the 1940s, and that evolution is still going on. It
created a platform for managing resources and provides various
functions that have made the user's life convenient.
From this assignment, we also observed that the development of
operating systems is strongly linked with the generations of
computers. As computers evolved, operating systems also established
themselves in the field. From MS-DOS to Windows 10, Microsoft's
operating systems have changed completely. Modern operating systems
have highly professional GUIs. Nowadays, operating systems are also
available as open source, which means you can directly make changes
as you require.