Hardware, Software and Data Communications CPU: Control Unit
CPU
The CPU is the brain of the computer where data is manipulated. In the average
microcomputer the entire CPU is a single unit called a microprocessor.
Most microprocessors are single chips mounted on a piece of plastic with metal wires
attached to it. Some newer microprocessors include multiple chips and are encased in
their own cover and fit into a special socket on the motherboard.
All the computer's resources are managed from the control unit. It is the logical hub
of the computer.
The CPU’s instructions for carrying out commands are built into the control unit. The
instruction set lists all the operations that a CPU can perform and is expressed in
microcode (a series of basic directions telling the CPU how to execute more complex
operations).
Arithmetic operations: addition, subtraction, multiplication and division.
Logical operations: comparisons such as equality, greater than and less than.
Some of the logical operations can be done on text, e.g. searching for a word in a
document means that the CPU carries out a rapid succession of equals operations to
match the sequence of ASCII codes making up the search word.
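As an illustration, a minimal Python sketch of this character-by-character matching; the function name and sample text are invented for the example:

```python
# A word search reduces to a rapid succession of "equals" operations
# on character codes, as described above. Illustrative sketch only.

def find_word(document: str, word: str) -> int:
    """Return the index of the first match of word in document, or -1."""
    for start in range(len(document) - len(word) + 1):
        match = True
        for offset in range(len(word)):
            # Each step is one equality test on two ASCII codes.
            if ord(document[start + offset]) != ord(word[offset]):
                match = False
                break
        if match:
            return start
    return -1

print(find_word("the CPU compares ASCII codes", "ASCII"))  # prints 17
```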
Many instructions carried out by the control unit involve moving data. When the
control unit encounters an instruction involving logic or arithmetic, it passes it to the
ALU (arithmetic logic unit).
The ALU includes a group of registers (high speed memory locations built directly
into the CPU). These are used to hold the data currently being processed.
For example, the control unit loads two numbers from memory into registers in the
ALU, then tells the ALU to divide the two numbers (arithmetic) or compare them to
see whether they are equal (logic).
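A toy Python model of this sequence; the memory addresses, values and register names are invented:

```python
# Control unit / ALU interaction: load two numbers from "memory" into
# registers, then perform an arithmetic and a logical operation on them.

memory = {0x10: 84, 0x11: 12}   # invented addresses and contents
registers = {}

# Control unit: load two numbers from memory into ALU registers.
registers["R1"] = memory[0x10]
registers["R2"] = memory[0x11]

# ALU: divide the two numbers (arithmetic) or compare them (logic).
quotient = registers["R1"] / registers["R2"]
equal = registers["R1"] == registers["R2"]

print(quotient, equal)  # 7.0 False
```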
Intel Processors
Pre-Pentium Processors
Intel’s first processors were simple by today’s standards but provided a level of
computing never seen before in a single processing chip. A major improvement that
came with the 80386 is called virtual 8086 mode. In this mode a single 80386 chip
could achieve the processing power of 16 separate 8086 chips each running a separate
copy of the operating system. This capability enabled the chip to run different
programs at the same time – a technique known as multitasking. All the chips
following the 386 had this capability.
The 80486, released in 1989, didn't feature any radically new processor techniques.
Instead it combined a 386 processor, a math coprocessor and a cache memory
controller on a single chip. These components no longer needed to communicate via
the bus, increasing the speed of the system dramatically.
The Pentium
Introduced in 1993. Intel broke with its tradition of numeric names, partly to prevent
other manufacturers using similar numeric names, but the Pentium is still considered
part of the 80x86 family. It runs applications approximately five times faster than the
486 at the same clock speed. Part of the Pentium's speed comes from a superscalar
architecture, which allows the chip to process more than one instruction in a single
clock cycle.
MMX uses a single instruction, multiple data (SIMD) process that enables one
instruction to perform the same task on multiple pieces of data, reducing the number
of loops required to handle audio, video, animation and graphical data.
The Pentium II
Released in summer 1997. It has 7.5 million transistors and clock speeds of up to
450MHz. It supports MMX technology and dynamic execution.
It differs from other Pentium models in that it is encased in a plastic and metal
cartridge rather than the wafer format used for other chips. It needs this casing
because of its single edge connector scheme: instead of plugging into the regular
socket on the motherboard, the Pentium II plugs into a special slot called Slot One,
which requires a different motherboard. Enclosed within the cartridge are the core
processor and the L2 cache chip.
In 1998 the Pentium II family was expanded with the release of the Celeron and Xeon
which adapted the Pentium II technology for new markets.
The Celeron has many of the features of the Pentium II but runs at slower speeds and
is designed for entry-level PCs in the $1000 range.
Another advantage of the P II is the ability to work with a 100MHz data bus. Prior to
the P II, data buses typically ran at 66MHz or less. Improved data bus speeds mean
faster overall performance.
The Pentium III
The Pentium III uses a cartridge and slot configuration like the P II, and early releases
took advantage of the 100MHz bus. Shortly after its release Intel announced a
133MHz bus, improving performance further.
The Xeon version of the P III was released in late 1999 and, like its P II counterpart,
provided faster performance by offering a larger L2 cache.
K6 Processors
AMD's K6 was not entirely compatible with Intel processors and initially performed
at slower speeds, but AMD continued to improve the line and began to overtake Intel
in some markets.
The K6-2 processor was released in 1998, with speeds ranging from 300 to 475 MHz,
a 100MHz data bus and L2 cache sizes up to 2 MB (compared with the P II's 512KB).
It also features 64-bit registers and can address 4GB of memory.
The K6-III was released in 1999, with speeds of 400 to 450 MHz and a smaller L2
cache, but it features a new L3 cache (up to 2MB) not found in the P III.
K6 processors feature MMX technology; they do not offer SSE but instead use
AMD's 3DNow!, providing enhanced multimedia performance.
Athlon Processor
Released in 1999, the Athlon was the fastest processor available operating at speeds
up to 650 MHz. In March 2000 it was the first PC class processor to achieve speeds
of 1GHz. It was designed to work with buses of 200 MHz and includes 64KB of L1
cache and 512KB of L2 cache. It is capable of addressing 64GB of memory and
features 64-bit registers.
Cyrix Processors
The company began as a maker of specialised chips but in the mid-1990s began to
produce processors to rival Intel's, focusing on PCs that sell for less than $1000. In
1997 it introduced the MediaGX, a Pentium-compatible microprocessor that
integrated audio and graphics functions, operating at speeds of 233 MHz and higher.
In 1999 Cyrix was sold to VIA Technologies Inc., which continued the MII line. This
Pentium II-class processor operates at speeds of 433 MHz and can be found in PCs
from various manufacturers.
Motorola Processors
680x0 series
Best known as the foundation of the original Macintosh, the 680x0 series actually
predates the Mac; IBM considered using the 68000 in its first PC. The 68000
(released in 1979) was more powerful than Intel's 8088, but subsequent improvements
came more slowly. By the time Motorola released the 68060 chip in 1993, Intel was
already promoting the Pentium.
PowerPC series
In 1991 IBM and Apple joined forces with Motorola to dethrone Intel from its pre-
eminence in the PC chip market. The hardware portion of the effort focused on the
PowerPC chip, the first of which was the 601. It was followed soon after by the 603,
a low-power processor suitable for notebooks, and the 604 and 604e, high-power
chips designed for high-end desktops. The 620, introduced in 1995, established a new
performance record for microprocessors. The 750 chip (266 MHz) was released for
desktop and mobile computers needing high performance but low voltage.
The G3, released in 1998, provides even more power. Apple's iMac and Power Mac,
built around the G3, offer better performance and speed than P II systems at a lower
cost.
In 1999 Apple released the G4. Operating at speeds of 500MHz and higher, the
128-bit processor is capable of performing 1 billion floating-point operations
(1 gigaflop) per second. It also features 1MB of L2 cache and a bus speed of 100MHz.
RISC Processors
Both the Motorola 680x0 and Intel 80x86 families are complex instruction set
computing (CISC) processors. Their instruction sets are large, typically containing
200 to 300 instructions. A newer theory holds that if the instruction set is small and
simple, each instruction will execute more quickly, allowing the processor to
complete more instructions in a given period. CPUs of this type are called reduced
instruction set computing (RISC) processors. The RISC design, used in the PowerPC
but first implemented in the mid-1980s, results in a faster, cheaper processor.
Machine Cycles
A CPU executes an instruction by taking a series of steps. The complete series of
steps is called a machine cycle.
The type of processor being used determines the number of steps in a machine cycle.
Although the process is complex the computer can accomplish it incredibly fast. CPU
performance is often measured in millions of instructions per second (MIPS).
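A minimal sketch of the idea in Python, using an invented two-instruction machine; real CPUs carry out these steps in hardware:

```python
# Fetch-decode-execute loop: each pass through the loop is one
# (highly simplified) machine cycle.

program = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]
accumulator = 0
pc = 0  # program counter

while True:
    opcode, operand = program[pc]   # fetch
    pc += 1
    if opcode == "LOAD":            # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 10
```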
Input Devices
Input device – enables you to input information and commands into the computer.
Standard
• Keyboard
• Mouse
Non-standard
Hand
• Pens
• Touch screen
• Game controllers
Optical
• Bar code readers
• Image scanners and optical character recognition
Audio visual
• Microphones and speech recognition
• Video input
• Digital cameras
Output devices
Monitor:
Two basic types of monitors are used with PCs:
• CRT (Cathode Ray Tube) – works in the same way as a TV screen, using a large
vacuum tube.
• Flat Panel Display – primarily used with portable computers and are
becoming more popular with desktops.
PC Projectors
It is now common to use software to create presentations and deliver them directly
from the screen. A PC projector plugs into one of the computer's ports and projects
the video output onto an external surface.
Sound Systems
Speakers and their associated technology are now important output devices.
Printers
2 categories:
1. Impact
2. Non Impact
Impact
Creates an image by pressing an inked ribbon against paper using pins or
hammers to shape the image e.g. typewriter.
Magnetic Disks
Diskette drives and hard disk drives are the most commonly used storage devices in
PCs. Both fall into the magnetic storage category because they record data as
magnetic fields.
Almost all PCs sold today come with a hard disk and one diskette drive. Some
computers also feature a third built-in magnetic device that uses high-capacity floppy
disks.
Tape Drives
Read and write data to the surface of a tape the same way as an audiocassette –
difference is that a computer tape drive writes digital data.
The most popular alternative to magnetic storage systems is optical storage media.
The most widely used type of optical storage medium is the compact disk (CD),
which is used in CD-ROM, DVD-ROM, CD-R, CD-RW and PhotoCD systems.
DVD-ROM
Digital video (or versatile) disk read only memory, is a high-density medium capable
of storing a full-length movie on a single disk the size of a CD.
CD-R allows you to create your own CD-ROM disks that can be read by any CD-
ROM drive. After the information has been written to the CD it cannot be changed.
Using CD-RW drives the user can write and overwrite data onto CDs. With a CD-
RW data can be revised in the same manner as a floppy disk.
One popular form of recordable CD is PhotoCD, a standard developed by Kodak for
storing digitised photographic images on a CD. Many film-developing stores now
have PhotoCD drives that can store your photos and put them onto a CD.
Classifying Computers
Midrange computers – less powerful, less expensive and smaller than mainframes
but capable of supporting the computer needs of a smaller organisation or managing
networks.
Workstations - are specialised single user computers with many of the features of
PCs but with the processing power of a minicomputer. These powerful machines are
popular amongst scientists, engineers, graphic artists, animators and programmers –
users who need a great deal of processing power. Workstations typically use
advanced processors and have more RAM and storage capacity than PCs.
In some firms client/server networks with PCs have actually replaced mainframes.
The process of transferring applications from large computers to smaller ones is called
downsizing. This can potentially reduce computing costs.
In one form of client/server computing, client processing and storage capabilities are
so minimal that the bulk of computer processing occurs on the server. Clients with
minimal memory, storage and processing power which are designed to work on
networks are called network computers (NCs). NCs download software or data
needed from a central computer.
Another form of distributed computing puts processing power back onto users'
desktops, linking computers so they can share processing tasks. This model stands in
contrast to the NCs because processing power resides on the individual desktop and
these computers work together without a server or any central controlling authority.
Software
There are 2 major types of software – system software and application software.
Application software describes the programs that are written to apply the computer to
a specific task.
System software co-ordinates the various parts of the computer system and mediates
between application software and computer hardware.
Software engineers use two methods to develop multitasking operating systems. The
first requires cooperation between the OS and application programs. Programs that
are currently running periodically check with the OS to see whether any other
programs need the CPU; if any do, the running program relinquishes control of the
CPU to the next program. This method is called cooperative multitasking and is used
to allow
activities such as printing while the user continues to type or use the mouse to input
more data.
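A minimal sketch of cooperative multitasking in Python, with generators standing in for programs that voluntarily yield the CPU (the task names are invented):

```python
# Each "program" runs briefly, then yields control back to the
# scheduler; this is the cooperative step described above.

def printer():
    for page in range(3):
        print(f"printing page {page}")
        yield  # relinquish the CPU so other programs can run

def editor():
    for key in "abc":
        print(f"typed {key}")
        yield

tasks = [printer(), editor()]
while tasks:
    for task in list(tasks):
        try:
            next(task)              # let the task run until it yields
        except StopIteration:
            tasks.remove(task)      # the program has finished
```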
The second method is called pre-emptive multitasking. With this method the OS
maintains a list of programs that are running and assigns a priority to each program in
the list. The OS can intervene and modify a program's priority status by rearranging
the priority list. With pre-emptive multitasking the OS can preempt the program that
is running and reassign the time to a higher priority task at any time. Pre-emptive
multitasking thus has the advantage of being able to carry out higher priority
programs faster than lower priority programs.
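A minimal Python sketch of priority-based scheduling; the tasks and priorities are invented, and a real OS pre-empts on timer interrupts rather than in a simple loop:

```python
import heapq

# (priority, name, remaining time slices); lower number = higher priority.
ready = [(2, "report", 2), (1, "interrupt handler", 1), (3, "backup", 2)]
heapq.heapify(ready)

# On every time slice the scheduler runs the highest-priority task,
# so a high-priority task is always served before lower-priority ones.
while ready:
    priority, name, remaining = heapq.heappop(ready)
    print(f"running {name} (priority {priority})")
    if remaining > 1:
        heapq.heappush(ready, (priority, name, remaining - 1))
```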
Virtual storage – handles programs more efficiently because the computer divides
each program into small fixed- or variable-length portions, storing only a small
portion of the program in primary memory at one time. Only a few statements of a
program actually execute at any one time, which permits a very large number of
programs to reside in primary memory. All other program pages are stored on a
peripheral disk unit until they are ready for execution.
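A minimal Python sketch of demand paging along these lines; the page size, memory capacity and eviction rule are arbitrary choices for the example:

```python
PAGE_SIZE = 4
disk = {n: f"page-{n}" for n in range(10)}  # whole program on disk
ram = {}                                     # primary memory, a few pages
RAM_CAPACITY = 3

def access(address: int) -> str:
    """Touch an address, loading its page from disk if necessary."""
    page = address // PAGE_SIZE
    if page not in ram:                      # page fault
        if len(ram) >= RAM_CAPACITY:
            ram.pop(next(iter(ram)))         # evict the oldest-loaded page
        ram[page] = disk[page]               # load from peripheral disk
    return ram[page]

for addr in [0, 5, 21, 33, 2]:
    print(addr, access(addr), sorted(ram))
```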
Because they aid the inner workings of the computer system, utilities are grouped
with the OS under the category of system software.
Popular utilities range from programs that can organise or compress the files on a disk
to programs that help you remove programs that you no longer use from your hard
disk. Some of the major categories of utilities include:
• File defragmentation utilities
• Data compression programs
• Backup utilities
• Antivirus programs
• Screen savers
With an OS you see and interact with a set of items on the screen, the user interface.
In the case of most current OS the user interface looks like a collection of objects on a
coloured background. Most current OS provide a graphical user interface (GUI).
Apple computer introduced the first GUI with its Macintosh computer in 1984. GUIs
are so called because you use a mouse (or other pointing device) to point at graphical
objects on the screen.
Leading PC OS
• Windows XP
(eXPerience) combines the reliability and robustness of Windows NT/2000 with an
improved GUI. It is meant for powerful new PCs with at least 400MHz of
processing power and 128MB of RAM.
• Windows 98
Genuine 32 bit OS providing a streamlined GUI. Features multi-tasking and
powerful network capabilities. It can support additional hardware
technologies.
• Windows Me
Enhanced version of Windows 98. It has tools for working with photos and
video recordings and tools to simplify networking. It also has a media player
bundled with it.
• Windows 2000
Another 32 bit system with features that make it appropriate for applications in
large networked organisations. Earlier versions of this were Windows NT.
• UNIX
An interactive, multiuser, multitasking OS developed by Bell Labs in 1969 to
help scientific researchers share data. UNIX can run on many different types
of computers and can be easily customised. It is considered very powerful but
very complex.
• Linux
A UNIX like OS that is free, reliable, compactly designed and capable of
running on many different hardware platforms. It is an example of open-source
software, which gives everyone access to its program code so that they can modify it
to fix errors or make improvements.
• DOS
16 bit OS for older PCs based on the IBM PC standard. Does not support
multi-tasking and limits the size of a program in memory to 640kB.
• Mac OS
OS for the Mac featuring multi-tasking, powerful multimedia and networking
capabilities and a mouse driven GUI.
Application Software
Spreadsheets
Provide computerised versions of traditional modelling tools, organised into a grid
of columns and rows. The power of the spreadsheet lies in the ability of the program
to recompute all associated values when you choose to change one. Useful for
applications requiring modelling or what-if analysis. Many spreadsheet packages
include graphics functions that can present the data in a variety of charts. The most
popular packages include Microsoft Excel and Lotus 1-2-3.
Presentation graphics
Allows users to create professional quality graphics presentations. This software can
convert numeric data into charts and other types of graphics and can include
multimedia displays of sound, animation, photos and video clips. Microsoft
PowerPoint and Lotus Freelance Graphics are popular packages.
Email software
Used for computer to computer exchange of messages and is an important tool for
communication and collaborative work. A person can use a networked computer to
send notes or longer documents to a recipient. Web browsers and the PC software
suites have email capabilities but specialised email software packages are also
available for use on the Internet.
Web browsers
Easy to use software tools for displaying Web pages and accessing the web and other
Internet resources. Browsers can display or present graphics, audio and video
information as well as traditional text. They have become the primary interface for
accessing the Internet or for using networked systems based on Internet technology.
The two leading commercial web browsers are Microsoft’s Internet Explorer and
Netscape Navigator.
Groupware
Provides functions and services to support the collaborative activities of work groups.
Groupware includes software for group writing and commenting, information sharing,
electronic meetings, scheduling and email and a network to connect the members of
the group as they work. Any member can review the ideas of others and add to them
or individuals can post documents for others to comment on or edit. Leading
commercial packages include Lotus Notes and Open Text's Livelink.
Data Communications
Communications channels
The means by which data is transmitted from one device to another. A channel can
use different kinds of telecommunications transmission media:
• Twisted wire
• Coaxial cable
• Fibre optics
• Wireless transmission
Twisted-Pair Cable
Normally consists of 2 wires individually insulated in plastic and then twisted around
each other and bound together in another layer of plastic.
Except for the plastic nothing shields the wire from outside interference so it is
sometimes called UTP (unshielded twisted-pair). Some wires are encased in a metal
sheath and are therefore called STP (shielded twisted-pair).
This type of wire is also sometimes called telephone wire as it is used for indoor
telephone wiring. Today most twisted-pair wire used for network communication is
made to more demanding specifications than voice grade wire.
Network media are sometimes compared by the amount of data that they can transmit
per sec. The difference between the highest and lowest frequencies of a transmission
channel is known as bandwidth – the higher the bandwidth the more data that can be
transferred at any one time. Networks based on twisted-pairs now support
transmission speeds of up to 1Gbps.
Coaxial Cable
Sometimes called coax, it is similar to the cable used in cable television systems.
There are
2 conductors in a coaxial cable – one is a single wire at the centre of the cable and the
other is a wire mesh shield surrounding the first wire with an insulator in between.
It can support transmission speeds up to 10 Mbps and so can carry more data than
older types of twisted-pair wiring. It is more expensive and less popular than the
newly improved twisted-pair technology. Two types of coaxial cable are used:
• Thick – old and seldom used in new networks.
• Thin
Fibre-Optic Cable
A thin strand of glass that transmits pulsating beams of light rather than electrical
signals. The strand carries the light all the way from one end to the other, bending
around corners on the way. Because light travels at much greater speeds than
electrical signals, fibre-optic cables can carry data at more than a billion bps, and
speeds are now approaching 100Gbps.
Wireless Links
Wireless communication relies on radio signals or infrared signals for transmitting
data.
Four common uses:
1. Office LANs can use radio signals to transmit data between nodes.
2. Laptops can be equipped with cellular phone equipment and a modem.
3. Corporate WANs often use microwave transmission to connect 2 LANs within
the same area. Requires unobstructed line of sight between 2 antennas.
4. WANs that cover large distances often use satellites and microwave
communication.
Network Topologies
The topology – the physical or logical layout of the cables and devices that connect
the nodes of the network. The three basic topologies are:
• Bus
• Star
• Ring
A lesser-used topology:
• Mesh
Bus Topology
Each node is connected in series to a single cable; at the cable's start and end points a
special device called a terminator is attached. This stops the network signals so they
do not bounce back down the cable.
The disadvantages:
• Keeping data from colliding requires extra circuitry and software.
• A broken connection can bring down or crash all or part of the network.
The primary advantage:
• Uses the least amount of cabling of any topology.
Star Topology
In a star network, each node is connected to a central hub. Some hubs, known as
intelligent hubs, can monitor traffic and help prevent collisions. A broken connection
does not affect the rest of the network; if the hub fails, however, all nodes connected
to that hub are unable to communicate.
Ring Topology
Each node examines the data sent through the ring. If the data (known as a token) is
not addressed to the node examining it, the node passes it along to the next node in
the ring.
There is no danger of collisions because only one packet of data may traverse the ring
at a time. If the ring is broken the entire network is unable to communicate until the
ring is restored.
Mesh Topology
The mesh topology, in which each node is connected to every other node, is
impractical for most working environments but is ideal for connecting routers on the
Internet.
It may be helpful to connect different LANs together. To understand how this may be
possible there is a need to understand how networks transmit data and how different
types of networks share data.
On a small network data is broken into small groups called packets before being
transmitted from one computer to another. A packet is a data segment that includes a
header, payload and control elements that are transmitted together. The receiving
computer reconstructs the packets into the data's original structure.
The payload is the part of the packet that contains the actual data being sent. The
header contains info about the type of data in the payload, the source and destination
of the data and a sequence number so that data from multiple packets can be
reassembled at the receiving computer in the proper order.
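A minimal Python sketch of packetising and reassembly; the header fields and message are invented:

```python
import random

def to_packets(message, size=8):
    """Split a message into packets of header info plus payload."""
    return [
        {"src": "A", "dst": "B", "seq": i, "payload": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

packets = to_packets("Packets can arrive in any order at all.")
random.shuffle(packets)                 # simulate network reordering

# The receiver uses the sequence numbers to restore the proper order.
received = sorted(packets, key=lambda p: p["seq"])
print("".join(p["payload"] for p in received))
```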
Each LAN is governed by a protocol, which is a set of rules and formats for sending
and receiving data and an individual LAN may utilise more than 1 protocol.
If two LANs are built around the same communication rules, then they can be
connected with one of two devices:
1. Bridge
A device that looks at the information in each packet header and forwards the
data that is travelling from one LAN to another.
2. Router
A more complicated device that stores routing information for networks. Like a
bridge, a router looks at the packet header to determine where the packet should
go, then chooses a route for the packet to take to reach its destination (a minimal
sketch of this lookup follows the list).
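A minimal Python sketch of that header lookup; the table entries and addresses are invented:

```python
import ipaddress

# The device reads only the destination in the packet header and looks
# up which connection the packet should be forwarded to.
routing_table = {
    "194.145.128.0/24": "LAN A",
    "10.0.0.0/8": "LAN B",
}

def forward(packet):
    dst = ipaddress.ip_address(packet["dst"])
    for network, link in routing_table.items():
        if dst in ipaddress.ip_network(network):
            return link
    return "default gateway"

print(forward({"dst": "194.145.128.14", "payload": "..."}))  # LAN A
```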
If you need to create a more sophisticated connection between networks, you need a
gateway, a computer system that connects the two networks and translates information
from one to the other. Packets from different networks have different types of
information in their headers, and the info can be in various formats. The gateway can
take a packet from one type of network, read its header and encapsulate the whole
packet into a new one, adding a header that is understood by the second network.
Geographical distance aside, the main difference between a WAN and a LAN is the
cost of transmitting data. In a LAN all components are typically owned by the
organisation that uses them. To transmit data across great distances, a WAN-based
organisation typically leases many of the components used for data transmission,
such as high-speed phone lines or wireless technologies such as satellite.
Internet Basics
The Beginning
Seeds of the Internet were planted in 1969 when the Advanced Research Projects
Agency (ARPA) of the US Department of Defense began connecting computers at
different universities and defence contractors. The goal was to create a large computer
network with multiple paths that could survive a nuclear attack or other disaster.
Soon after the first links in ARPANET were in place engineers and scientists began
exchanging data beyond the scope of the defence department’s original objectives.
The users convinced ARPA that the unofficial uses were helping to test the capacity
of the network.
Initially the network included four primary host computers. A host is like a network
server, providing services to other computers that connect to it. ARPANET's host
computers provided file transfer and communication services and gave connected
systems access to the network's high-speed data lines. The system grew quickly and
spread widely as the number of hosts grew.
It jumped across the Atlantic to Norway and England in 1973 and never stopped
growing. In the mid-1980s another federal agency, the National Science Foundation
(NSF), joined the project after the DoD dropped its funding. The NSF established
five supercomputing centres that were available to anyone who wanted to use them
for academic research purposes.
The NSF expected the supercomputers' users to use ARPANET to obtain access but
quickly discovered that the network could not handle the load. The NSF therefore
created a new, higher-capacity network called NSFnet to complement the older,
overloaded ARPANET.
The link between ARPANET, NSFnet and other networks was called the Internet.
NSFnet made Internet connections widely available for academic research but did not
permit users to conduct private business over the system. Therefore several
telecommunications companies built their own network backbones that used the same
protocols as NSFnet. A network's backbone is the central structure that connects
other elements of the network. These private portions of the Internet were not limited
by NSFnet's appropriate-use restrictions, so it became possible to use the Internet to
distribute business and commercial information.
The original ARPANET was shut down in 1990 and government funding for NSFnet
was discontinued in 1995 but the commercial Internet backbone services have easily
replaced them. By the early 90s interest in the Internet began to expand dramatically.
The system that had been created as a tool for surviving nuclear war found its way
into businesses and homes.
Today
Today the Internet connects thousands of networks and more than 100 million users
around the world. It is a huge cooperative community with no central ownership.
This lack of ownership is an important feature of the Internet because it means that no
single person or group controls the network. Although several organisations (such as
the Internet Society and the World Wide Web Consortium) propose standards for
Internet-related technologies and guidelines for appropriate use, these organisations
almost universally support the Internet's openness and lack of central control.
As a result the Internet is open to anyone who can access it. If you can use a
computer and it is connected to the Internet you are free not only to use the resources
posted by others but to create resources of your own, that is you can publish
documents on the World Wide Web, exchange email messages and perform many
other tasks.
The single most important fact to understand about the Internet is that it can
potentially link your computer to any other computer. Anyone with access to the
Internet can exchange text, data files and programs with any other user. For all
practical purposes almost everything that happens on the Internet is a variation of one
of these activities. The Internet itself is the pipeline that carries the data between
computers.
The Internet works because every computer connected to it uses the same set of rules
and procedures (protocols) to control timing and data format. The protocols used by
the Internet are called Transmission Control Protocol/Internet Protocol universally
abbreviated as TCP/IP.
These protocols include the specifications that identify individual computers and that
govern the exchange of data between computers. They also include rules for several
categories of application programs, so programs that run on different kinds of
computers can talk to one another.
TCP/IP software looks different on different types of computers but it always presents
the same appearance to the network. It does not matter if the system at the other end
of a connection is a supercomputer, a pocket-size PC or anything in between – as long
as it recognises TCP/IP protocols it can send and receive data through the Internet.
Most computers are not connected directly to the Internet. Rather they are connected
to smaller networks that connect to the Internet backbone through gateways. This is
why the Internet is sometimes described as a network of networks. The core of the
Internet is the set of backbone connections that tie the local and regional networks
together and the routing scheme that controls the way each piece of data finds its
destination.
The Internet includes many thousands of servers each with its own unique address.
These servers in tandem with routers and bridges do the work of storing and
transferring data across the network.
Because the Internet creates a potential connection between any 2 computers the data
may be forced to take a long, circuitous route to reach its destination. Suppose, for
example, you request data from a server in another area:
1. Request must be broken into packets
2. The packets are routed through your local network, and possibly through one
or more subsequent networks, to the Internet backbone.
3. After leaving the backbone the packets are routed through one or more
networks until they reach the appropriate server and are reassembled into the
complete request.
4. Once the destination server receives your request it begins sending you the
requested data which winds its way back to you possibly over a different
route.
Between the destination server and your PC, the request and the data may travel
through several different servers, each helping to forward the packets to their final
destination.
Internet activity can be defined as computers communicating with each other using
the common language of TCP/IP. Examples include:
• Client system communicating with an Internet server.
• Internet server computer communicating with a client computer.
• 2 server computers communicating with each other.
• 2 client computers communicating via one or more servers.
The computer that originates a transaction must identify its intended destination with
a unique address. Every computer on the Internet has a four-part numeric address
called the Internet Protocol address (IP address), which contains routing information
that identifies its location. Each of the four parts is a number between 0 and 255, so
an IP address looks like 194.145.128.14.
Computers have no problems working with long strings of numbers but we are not so
skilled! Most computers on the Internet also have an address called a domain name
system (DNS) address, an address that uses words rather than numbers.
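A minimal sketch of both address forms using Python's standard socket module; the hostname is illustrative and resolving it requires a live network connection:

```python
import socket

def is_valid_ip(address):
    """Check the four-part form: four numbers, each between 0 and 255."""
    parts = address.split(".")
    return (len(parts) == 4 and
            all(p.isdigit() and 0 <= int(p) <= 255 for p in parts))

print(is_valid_ip("194.145.128.14"))        # True
# DNS turns a word-based address into the numeric IP address.
print(socket.gethostbyname("example.com"))  # e.g. 93.184.216.34
```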
The group’s goal was to expand the list of top-level domains to make it easier for
organisations of all kinds to create an Internet domain for themselves. The group
developed the Generic Top-Level Domain Memorandum of Understanding (TLD-
MoU) which spells out proposals for the future management of Internet domains and
proposes seven new top-level domains for future use.
As a business tool the Internet has many uses. Email is an efficient and inexpensive
way to send and receive messages and documents around the world. The www is becoming
an important advertising medium and channel for distribution. Databases and online
information archives are often more up to date than any library. The Internet also has
virtual communities made up of people who share interests.
Most individual users connect their computer's modem to the phone line and set up
an account with an Internet Service Provider (ISP), which provides local and regional
access to the Internet backbone. Many others connect through a school or business
LAN.
The web was created in 1989 at the European Particle Physics Lab in Geneva as a
method for incorporating footnotes, figures and cross-references into online hypertext
documents. A hypertext document is a specially encoded file that uses the hypertext
markup language (HTML). This language allows a document's author to embed
hypertext links (called hyperlinks or links) into the document.
As you read a hypertext document (web page) on screen you can click on an encoded
word or picture and immediately jump to another location. A collection of web pages
is called a web site, and these are housed on a web server. Copying a page to the
server is called posting the page (or publishing or uploading).
Popular web sites receive millions of hits (or page views) per day, and many
Webmasters measure their site's success by the number of hits in a given timeframe.
A Webmaster is the person or group responsible for designing and maintaining a
website. The terms www and Internet are often used interchangeably; however, the
www is just one part of the Internet.
Mosaic, a point-and-click web browser, was developed at the University of Illinois in
1993. A web browser is a software application designed to find hypertext documents
on the web and open them on the user's computer. A web browser displays a web
page as specified by the page's underlying HTML code. The code provides the
browser with:
• Fonts and font sizes
• Where and how to display graphics
• If and how to display sound, animation or other special content.
• Location of links and where to go if they are clicked.
• Whether special programming codes, which the browser needs to interpret, are
used in the page.
HTML tags, which are enclosed in angle brackets (< >), tell the browser how to
display individual elements on the page. They are placed around the portions of the
document that they affect. Most tags come in pairs, with a starting tag such as <H1>
and an ending tag such as </H1>; a slash indicates an ending tag.
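A minimal sketch using Python's standard html.parser to show starting and ending tags being recognised; the snippet is invented:

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print(f"starting tag: <{tag}>")

    def handle_endtag(self, tag):
        print(f"ending tag:   </{tag}>")

TagLogger().feed("<h1>CPU: Control Unit</h1><p>The CPU is the brain.</p>")
```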
The internal structure of the World Wide Web is built on a set of rules called hypertext
transfer protocol (HTTP). HTTP uses Internet addresses in a special format called a
uniform resource locator (URL), which looks like type://address/path. The type
specifies the kind of server on which the resource is located, the address is the
address of the server, and the path is the location within the file structure of the
server.
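A minimal sketch of splitting a URL into these parts with Python's standard urlparse; the URL is invented:

```python
from urllib.parse import urlparse

url = urlparse("http://www.example.com/docs/index.html")
print(url.scheme)   # http             (the "type")
print(url.netloc)   # www.example.com  (the "address")
print(url.path)     # /docs/index.html (the "path")
```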
Home Pages
Plugins are used to support several types of content including streaming audio and
streaming video. One of the most commonly used plugin applications is
Macromedia's Shockwave, which enables web designers to create high-quality
animation or video with sound that plays directly within the browser window.
Specialised Web sites called search engines use powerful data searching techniques
to discover the type of content available on the Web. By using a search engine and
specifying your topic of interest, you can find the right site or piece of information.
Electronic Mail
If you have an account with an ISP then you can establish an email address. This
unique address allows other users to send messages to you and allows you to send
messages to others. A user can set up an account by specifying a unique user name.
When you send a message you must include the person's username in the address, e.g.
[email protected]
When you send email the message is stored on a server until the recipient retrieves it.
This type of server is called a mail server and many use the post office protocol and
are called POP servers.
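A minimal sketch of retrieving mail from a POP server with Python's standard poplib; the server name and credentials are placeholders, and a real account is needed for it to run:

```python
import poplib

server = poplib.POP3_SSL("pop.example.com")   # placeholder mail server
server.user("jsmith")
server.pass_("secret")

count, size = server.stat()                   # messages held on the server
print(f"{count} messages ({size} bytes)")
if count:
    response, lines, octets = server.retr(1)  # retrieve the first message
    print(b"\n".join(lines).decode("utf-8", errors="replace"))
server.quit()
```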
Listserv systems
One type of mailing list that uses email is an automated list server or listserv. Users
on the list can post their own messages so the result is an ongoing discussion.
News
The Internet supports a form of public bulletin board called news. Many of the most
widely distributed newsgroups are part of a system called Usenet. Users post
articles about the group's topic, and as others respond they create a thread of linked
articles. A newsreader program obtains articles from the news server using the
network news transfer protocol (NNTP). To see articles posted on a specific topic
you subscribe to the newsgroup addressing that topic.
Telnet
Telnet is the Internet tool for using one computer to access a second computer. You
can send commands that run programs and open text or data files. Connecting to a
Telnet host is easy: enter the address and the Telnet program establishes a connection.
FTP
File Transfer Protocol is the Internet tool used to copy files from one computer to
another. When a user has accounts on more than one computer FTP can be used to
transfer data or programs between them.
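A minimal sketch of an FTP file copy with Python's standard ftplib; the host, account and file name are placeholders:

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")                     # placeholder FTP host
ftp.login("jsmith", "secret")
with open("report.txt", "wb") as f:
    ftp.retrbinary("RETR report.txt", f.write)   # copy the file down
ftp.quit()
```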
Internet Relay Chat (IRC)
Internet Relay Chat, or just chat, is a popular way for Internet users to communicate in
real time with other users. Chat does not require a waiting period between the time
you send a message and the time the other party receives it. IRC is often referred to
as the CB radio of the Internet because it enables few or many people to join a
discussion.
• Direct Connection
Programs run on the local computer which uses TCP/IP protocols to exchange
data with another computer through the Internet. An isolated computer connects
to the Internet through a serial data communications port using SLIP (Serial Line
Internet Protocol) or PPP (Point-to-Point Protocol).
• Remote Terminal Connection
Exchanges data and commands in ASCII format with a host computer that uses
UNIX or similar OS. TCP/IP application programs and protocols all run on the
host. This is known as a shell account, as the UNIX command interpreter is called a
shell.
• Gateway Connection
Even if a LAN does not use TCP/IP commands and protocols it may provide
some Internet services. Such networks use gateways that convert commands and
data to TCP/IP format.
• Connecting Through a LAN
If a LAN uses TCP/IP protocols for communication within the network it is
simple to connect to the Internet through a router, another computer that stores
and forwards data to other computers on the Internet.
• Connecting Through a Modem
If there is no LAN on site, a stand-alone computer can connect to the Internet
through a serial data communications port and a modem using either a shell
account and terminal emulation, or a direct connection with a SLIP (Serial Line
Internet Protocol) or PPP (Point-to-Point Protocol) account.
• High Speed Data Links
Using fibre optics, microwave and other technologies it is entirely practical to
establish an Internet connection that is at least 10 times faster than a modem
connection.
o ISDN service
o xDSL services
o Cable modem service
A firewall is set up to control access to a network by people using the Internet. Firewalls
act as barriers to unauthorised entry into a network that is connected to the Internet,
allowing outsiders access to public access areas but preventing them from exploring
proprietary areas of the network.
A firewall system can be hardware, software or both. It works by inspecting requests and
data that travel between the private network and the Internet. If the request or data
does not pass the firewall's security inspection, it is stopped from travelling any
further.
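A minimal Python sketch of that inspection step, with invented rules and request fields:

```python
# Outsiders may reach public areas; requests for proprietary areas
# are stopped from travelling any further.
PUBLIC_AREAS = {"/", "/products", "/support"}

def inspect(request):
    """Return True if the request may pass, False if it is blocked."""
    return request["path"] in PUBLIC_AREAS

print(inspect({"src": "outside", "path": "/products"}))  # True
print(inspect({"src": "outside", "path": "/payroll"}))   # False
```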
An intranet is a LAN or WAN that uses TCP/IP protocols but belongs exclusively to a
corporation, school or organisation. It is accessible only to the organisation’s workers. If
it is connected to the Internet then it is secured by a firewall to prevent unauthorised
access.
An extranet is an intranet that can be accessed by outside users over the Internet. To gain
entrance an external user typically must log on to the network by providing a valid user
ID and password.
• Ownership
Any piece of text or graphics retrieved from the Internet may be covered by
trademark or copyright law, making it illegal to use it without the owner's consent.
• Libel
If email messages are sent through an employer's network then the employer may
become involved if the sender is accused of libel.
• Appropriate use
When using a business network to access the Internet users must be careful to use
network resources appropriately.
Using an e-commerce site is like browsing through an online catalogue. When you are
ready to make your purchases you can pay in several ways,
• One time credit card purchases
Provide your personal and credit card information each time you make a
purchase.
• Set up an online account
If you think you will make more purchases with the online vendor you can set up
an account at the web site. The vendor stores your personal and credit card
information on a secure server and a cookie is placed on your computer disk.
Later when you access your account using a user ID and password the site uses
the information in the cookie to access your account.
• Use electronic Cash
Also called digital cash. Takes the form of a redeemable electronic certificate
which can be purchased from a bank that provides electronic cash services. Not
all e-commerce web sites accept digital cash yet.
• Electronic wallet
A program on your computer that stores credit card information, a digital certificate
that verifies your identity, and shipping details. It is not accepted by all e-commerce
sites.
Security
One way to provide secure websites is to encode pages using secure sockets layer
(SSL) technology, which encrypts the data. Another way is to use secure HTTP
(S-HTTP). SSL can be used to encode any amount of data, while S-HTTP is used to
encode individual pieces of data.
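A minimal sketch of an SSL-protected connection using Python's standard ssl module; example.com stands in for any secure site, and a live network connection is needed:

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    # Wrapping the socket means everything sent over it is encrypted.
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. TLSv1.3
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))
```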
EDI (electronic data interchange) differs from email in that it transmits an actual
structured transaction, as opposed to an unstructured text message such as a letter.
Typical EDI transactions exchanged between seller and customer include purchase
orders, payments, shipping notices, price updates and invoices.
Organisations can most fully benefit from EDI when they integrate the data supplied
by EDI with applications such as accounts payable, inventory control, shipping and
production planning and when they have carefully planned for the organisational
changes surrounding new business processes. Management support and training in
the new technology are essential. Companies must also standardise the form of the
transactions that they use with other firms and comply with legal requirements for
verifying that the transactions are authentic. Many organisations prefer to use private
networks for EDI transactions but are increasingly turning to the Internet for this
purpose.
Intranets provide a rich set of tools for creating collaborative environments in which
members of an organisation can exchange ideas, share information and work together
on common projects regardless of their physical location.
Intranets can be used to simplify and integrate business processes spanning more than
one functional area. These cross-functional processes can be co-ordinated
electronically, increasing organisational efficiency and responsiveness, and can be
co-ordinated with the business processes of other companies. Using Internet
technology, all members of the supply chain can instantly communicate with each
other, using up-to-date information to adjust purchasing, logistics, manufacturing,
packaging and schedules.
Management challenges and opportunities
Managers need to carefully review strategy and business models to determine how to
maximise the benefits of Internet technology. Managers should anticipate making
organisational changes to take advantage of this technology, including changes to
business processes, new relationships with the firm's value partners and customers,
and new business designs. Determining how and where to digitally enable the
enterprise with
Internet technology is a key management decision.