Thursday, September 16, 2010

AI

Forward chaining is one of the two main methods of reasoning when using inference rules (in artificial intelligence) and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.

Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one whose antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.

Inference engines will iterate through this process until a goal is reached.

For example, suppose that the goal is to conclude the color of a pet named Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules:

1. If X croaks and eats flies - Then X is a frog
2. If X chirps and sings - Then X is a canary
3. If X is a frog - Then X is green
4. If X is a canary - Then X is yellow

This rule base would be searched and the first rule would be selected, because its antecedent (If Fritz croaks and eats flies) matches our data. Now the consequent (Then Fritz is a frog) is added to the data. The rule base is again searched and this time the third rule is selected, because its antecedent (If Fritz is a frog) matches our data that was just confirmed. Now the new consequent (Then Fritz is green) is added to our data. Nothing more can be inferred from this information, but we have now accomplished our goal of determining the color of Fritz.

Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward chaining inference. The forward chaining approach is often employed by expert systems, such as CLIPS.

One of the advantages of forward-chaining over backward-chaining is that the reception of new data can trigger new inferences, which makes the engine better suited to dynamic situations in which conditions are likely to change.
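As a concrete illustration, here is a minimal forward-chaining sketch in C over the Fritz rule base above. The fact strings, the rule table and the two-antecedent rule structure are our own simplifications for this example, not the mechanism of a real engine such as CLIPS, which uses far more general pattern matching.

#include <stdio.h>
#include <string.h>

#define MAX_FACTS 16

/* Working memory: facts we currently know about Fritz. */
static const char *facts[MAX_FACTS] = { "croaks", "eats flies" };
static int fact_count = 2;

static int known(const char *f)
{
    for (int i = 0; i < fact_count; i++)
        if (strcmp(facts[i], f) == 0)
            return 1;
    return 0;
}

static void add_fact(const char *f)
{
    if (!known(f) && fact_count < MAX_FACTS)
        facts[fact_count++] = f;
}

/* A rule with up to two antecedents; an empty if2 means "no second condition". */
struct rule { const char *if1, *if2, *then; };

static struct rule rules[] = {
    { "croaks", "eats flies", "is a frog"   },
    { "chirps", "sings",      "is a canary" },
    { "is a frog",   "",      "is green"    },
    { "is a canary", "",      "is yellow"   },
};

int main(void)
{
    int changed = 1;
    /* Repeated application of modus ponens until no rule adds a new fact. */
    while (changed) {
        changed = 0;
        for (int i = 0; i < 4; i++) {
            struct rule r = rules[i];
            if (known(r.if1) && (r.if2[0] == '\0' || known(r.if2))
                && !known(r.then)) {
                printf("Inferred: Fritz %s\n", r.then);
                add_fact(r.then);
                changed = 1;
            }
        }
    }
    return 0;
}

Running this prints the two inferences in the order they are made: first that Fritz is a frog, then that Fritz is green.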
Backward chaining (or backward reasoning) is an inference method used in automated theorem provers, proof assistants and other artificial intelligence applications. It is one of the two most commonly used methods of reasoning with inference rules and logical implications – the other is forward chaining. Backward chaining is implemented in logic programming by SLD resolution. Both methods are based on the modus ponens inference rule.

Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if there is data available that will support any of these consequents. An inference engine using backward chaining would search the inference rules until it finds one which has a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, then it is added to the list of goals (in order for one's goal to be confirmed one must also provide data that confirms this new rule).

For example, suppose that the goal is to conclude the color of my pet Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules:

1. If X croaks and eats flies – Then X is a frog
2. If X chirps and sings – Then X is a canary
3. If X is a frog – Then X is green
4. If X is a canary – Then X is yellow

This rule base would be searched and the third and fourth rules would be selected, because their consequents (Then Fritz is green, Then Fritz is yellow) match the goal (to determine Fritz's color). It is not yet known that Fritz is a frog, so both the antecedents (If Fritz is a frog, If Fritz is a canary) are added to the goal list. The rule base is again searched and this time the first two rules are selected, because their consequents (Then X is a frog, Then X is a canary) match the new goals that were just added to the list. The antecedent (If Fritz croaks and eats flies) is known to be true and therefore it can be concluded that Fritz is a frog, and not a canary. The goal of determining Fritz's color is now achieved (Fritz is green if he is a frog, and yellow if he is a canary, but he is a frog since he croaks and eats flies; therefore, Fritz is green).

Note that the goals always match the affirmed versions of the consequents of implications (not the negated versions, as in modus tollens). Even then, their antecedents are considered as the new goals (not the conclusions, as in affirming the consequent), and these must ultimately match known facts (usually defined as consequents whose antecedents are always true); thus, the inference rule used is modus ponens.

Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.
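For comparison, here is a minimal backward-chaining sketch in C over the same four rules: it starts from the goal (Fritz's colour) and recursively tries to prove the antecedents from the known facts. Again, the data structures are illustrative simplifications rather than a general inference engine.

#include <stdio.h>
#include <string.h>

/* Known facts about Fritz. */
static const char *facts[] = { "croaks", "eats flies" };

struct rule { const char *if1, *if2, *then; };

static struct rule rules[] = {
    { "croaks", "eats flies", "is a frog"   },
    { "chirps", "sings",      "is a canary" },
    { "is a frog",   "",      "is green"    },
    { "is a canary", "",      "is yellow"   },
};

/* A goal is proved if it is a known fact, or if some rule concludes it
   and that rule's antecedents can themselves be proved (the new sub-goals). */
static int prove(const char *goal)
{
    for (int i = 0; i < 2; i++)
        if (strcmp(facts[i], goal) == 0)
            return 1;
    for (int i = 0; i < 4; i++) {
        struct rule r = rules[i];
        if (strcmp(r.then, goal) == 0
            && prove(r.if1)
            && (r.if2[0] == '\0' || prove(r.if2)))
            return 1;
    }
    return 0;
}

int main(void)
{
    const char *colours[] = { "is green", "is yellow" };
    for (int i = 0; i < 2; i++)
        if (prove(colours[i]))
            printf("Proved: Fritz %s\n", colours[i]);
    return 0;
}

Here prove("is yellow") fails because "chirps" and "sings" are neither known facts nor derivable, so only "Fritz is green" is printed.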

networking

INTRODUCTION TO COMPUTER NETWORKING
History of Computer Networking
The concept of computer networking is almost as old as computers themselves, although the practical implementation of that concept was considered improbable at the time when computers were being developed. Earlier, the only way engineers knew to transfer data between two computers was through some external storage device (at that time this used to be punched cards). The data from the source computer were punched onto the card, the card was then physically carried to the destination computer and inserted into it, and the data were read back. Primitive technology prevented two or more computers from exchanging data and information with each other without human intervention.
At the end of World War II, two superpowers emerged: the heavily computer-dependent United States of America, and the less tech-savvy Soviet Union. With its newly acquired nuclear weapons, the USA thought it had no competitor. That confidence lasted for about five years, until the Soviet Union too exploded its first nuclear device. Very rapidly both sides began to acquire long-range bomber aircraft and intercontinental-range missiles that could be launched from virtually anywhere and strike virtually any point in the opposing country's landmass. In short, America found no place to hide its national assets.
By this time the American armed forces had become heavily reliant on computers. All three services of the US armed forces (Army, Navy and Air Force) had their own computer systems of great power and capacity (by 1960s standards, of course). These systems were used to store information about the deployment and strength of American army, naval and air force units and to control their movement in case a war broke out. A paper published by the RAND Corporation, a body of highly talented people that advises the US government on future policies, described a scenario in which the USSR launches a small but accurate pre-emptive nuclear attack, using intercontinental missiles, on only those towns where the computer systems of the three services are located. The paper said that in such a scenario America would lose the war completely before even firing the first bullet in self defense. The situation is comparable to a very powerful man who has lost all control over his nervous system, and is therefore incapable of both offence and defense, as he cannot move a muscle.
In response, the US Department of Defense (the American counterpart to the Ministry of Defense in India) came up with a plan. It asked for a network of cheaper computers to be built using the existing copper telephone lines. The plan was to divide the databases and control systems from three highly sophisticated computers among many smaller, less capable computers that could exchange information without human intervention. The computers did not need to be powerful and expensive, because each of them was responsible for storing or processing only a minor part of the entire system. The advantage of such a system is that even if a nuclear strike occurs, it may still be unable to eliminate each and every node of the system, and the remaining nodes would be able to communicate among each other to reconstruct the system and direct a retaliatory attack against the enemy. The project was funded by the Department of Defense, and for the technical part it appointed scientists and engineers, mainly professors of different American universities and a team of bright students handpicked by their professors.
The technical challenge was huge because, as we have already said, no one thought that something like this would ever become possible. But the team worked on, and after some initial hiccups they were able to connect the computers belonging to the different universities using the copper telephone network of the American telecommunication industry. The project was proven to be possible and feasible; it was run under ARPA (the Advanced Research Projects Agency) and the resulting network was named the ARPANet.
Primarily the ARPANet was used for military purposes, with education as a secondary purpose. Because the universities collaborated in building the ARPANet, they were allowed to use the bandwidth to exchange research papers and the like. When it was seen that a huge amount of bandwidth still remained unused even after these uses, many commercial organizations, whose branches were spread out in different corners of the country and which had a similar need to connect their branch computers, began to seek permission to use the bandwidth in exchange for money. When this was allowed, many companies began to join the ARPANet. Soon enough, the commercial applications of the ARPANet became so large that the government had to create a separate network for commercial purposes only. This became a fully fledged operational network within the boundary of the United States of America.
By the 1980s, almost every developed country, like the United States of America, had its own commercial network within its national boundary. Proposals were then made to interconnect these national networks in order to create a system in which any computer in the world could communicate with any other computer in the world, irrespective of the country or the protocols used within those computers.
Networking of any broader kind can be considered as something called a “network of networks”. A simple network is a collection of computers that can share data and/or resources. A network of networks, as the name suggests, is a collection of networks that may have different architectures, hardware, software or sets of rules. Look at the following diagram to understand what a network of networks looks like.

As the diagram suggests, four networks have been connected together and the entire system now becomes a “network of networks”. The device that interconnects two or more networks is called a Router (shown in oval shape). The computer within a network which is responsible for connecting that entire network to the outside world is called a Gateway computer (shown as a bordered rectangle). Generally gateways are connected to routers. A network can have multiple gateways. Two networks might not be connected via the same router; in such a case the routers themselves may form a network of their own to indirectly connect the networks lying at the endpoints.
Computer Networks satisfy a broad range of purposes and meet various requirements. Some of the common objectives of computer communication networks are:
1. To provide sharing of geographically distant resources such as information, databases or processors (CPUs). Resource sharing is perhaps the most common objective for providing networks, within the constraints of cost and reliability of transmission links.
2. To provide communication among users. Network users geographically far apart from each other can converse in an interactive session or send messages to each other.
3. To increase the reliability of processing capacity through backup and redundancy. If one processing unit breaks down, another physically distant processor can take over.
4. To provide distributed processing capability, which means taking the processing out of a single large computer and distributing it to the places where the data is generated or where most of the operations are done. It is cost – effective as it eliminates most of the expensive large processors and also saves on transmission cost.
5. To provide centralized management and allocation of resources.
6. To enable modular enhancement of computing resources. We can at any time add (or remove) an extra smaller and inexpensive computer and connect it to the network to increase the total computing capacity of the network.
7. Superior price / performance ratio.
Classification of Networks
Local Area Network (LAN)
It is usually privately owned and links the devices in a single office, building or campus, as shown in the figure below.


Depending on the needs of an organization and the type of technology used, a LAN can be as simple as two PCs and a printer in someone’s home office; or it can extend throughout a company and include audio and video peripherals. LAN size is limited to a few kilometers.
LANs are designed to allow resources to be shared between personal computers or workstations. The resources to be shared can include hardware (e.g., a printer), software (e.g., an application program), or data. In LAN one of the computers may be given a large capacity disk drive and may become a server to the other clients. Software can be stored on this central server and used as needed by the whole group.
In addition to size, LANs are distinguished from other types of networks by their transmission media and topology. In general, a given LAN will use only one type of transmission medium. The most common LAN topologies are bus, ring and star. LANs have data rates in the 4 to 16 megabits per second (Mbps) range. Today, speeds are increasing and can reach 100 Mbps. Because the distance between two nodes in a LAN is small, the data transfer rate is high.
Metropolitan Area Network (MAN)
A MAN is designed to extend over an entire city. It may be a single network, such as a cable television network, or it may be a means of connecting a number of LANs into a larger network so that resources may be shared LAN-to-LAN as well as device-to-device. A company can use a MAN to connect the LANs in all its offices throughout a city, as shown in the figure below.







You can see that this diagram is very similar to the diagram of a “network of networks”, because a MAN is nothing but that. Think of a MAN as follows: a company may have 4 branches across the city, and each branch has a LAN within its building. If the company interconnects the buildings via a public telephone network providing telecommunication facilities across the city, the result is a MAN. Alternatively, the company can invest in setting up its own private network to connect the LANs. Generally the data transfer rate is lower than that of a LAN. Because data is being sent across greater distances, MANs often suffer from weakening of signal strength, and repeaters may have to be used to boost the signal midway.
Wide Area Network (WAN)
A WAN provides long distance transmission of data, voice, image and video information over large geographic areas that may comprise a country, a continent or even the whole world. In contrast to LANs, WANs may utilize public, leased or private communication equipment, usually in combination, and can therefore span huge distances. A WAN that is wholly owned and used by a single company is often referred to as an enterprise network. Because of the extreme distance data has to move, the data transfer rate becomes very low. On the other hand, the amount of data to be carried over international or intercontinental distances is huge. To make up for it, WAN links generally use very high speed communication media that can transmit data over a great distance at high speed without much need of repeating. Relay satellites and optical fibers are the preferred media for WANs.


Topology Designs
Topology means the arrangement of the computers in a network and the way they are connected. There are different types of topologies, each with its advantages and disadvantages. Below we describe some of them briefly.
Bus topology
It uses a single common cable or link (coaxial cable, broadcast radio frequency) to connect the nodes of the network to one another. It is encountered in existing Ethernet environments. Stations connect to the common medium through a series of taps located at specified distances from one another along the common cable, but only one station can transmit onto the common medium at any one time. Bus networks are, however, sensitive to distance and are difficult to troubleshoot.



There can be another kind of bus topology in which a bus is extended from the server and all the client computers are connected to that single bus.




Advantages: 1. Simple construction. 2. Can be upsized / downsized easily. 3. Cheap.
Disadvantages: 1. Nodes more distant from the server get less speed. 2. If an error occurs, it is difficult to diagnose.
Ring topology
The ring topology connects each node to the next one to form a closed loop. Each station has a transmitter and a receiver, and data is transmitted in one direction around the ring. It relies on well-defined rules of transmission and reception in order to function smoothly. Nodes are connected in a definite series, with information going from one node to the next in a predefined order (i.e. ascending or descending order of address). Since each station expects transmission from the station before it and sends transmission only to the station following it, the ring can be made to incorporate automatic fault location and recovery procedures. It can look like the following:




Or





A ring is relatively easy to install and reconfigure. Each device is linked only to its immediate two neighbors. To add or delete a device requires changing only two connections. The only constraints are media and traffic considerations (maximum ring length and number of devices). In addition, fault isolation is simplified. A signal is circulating at all times. If one device does not receive a signal within a specified period, it can issue an alarm. The alarm alerts the network operator to the problem and its location.
Advantages: 1. Simple to configure. 2. Easy to upsize / downsize. 3. Cheap.
Disadvantages: 1. Nodes have the additional duty of forwarding the data to the next node. 2. In case of a node failure or cable failure, part of the network may be lost.
Star topology
The star topology consists of a number of individual nodes, which communicate through a common central point. The common central point is often a concentrator device or hub. Many concentrators incorporate their own troubleshooting and monitoring functions, allowing network managers to determine faulty stations and remove them from the concentrator without disrupting the remaining network. It requires more cable than most other topologies and the entire network can come down if the hub goes bad.




Advantages: 1. All communication among the nodes must pass through the central node, so traffic monitoring can be done. 2. If one node fails, all the remaining parts of the network keep functioning. 3. Fault detection is also very easy.
Disadvantages: 1. Configuring the central node may be difficult for a large network, as the number of cables to handle at a single site becomes very large. 2. If the central node fails, the entire network fails.
Mesh topology
In a mesh topology, every device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries traffic only between the two devices it connects. A fully connected mesh network therefore has n(n – 1)/2 physical channels to link n devices; for example, 5 devices need 5 × 4 / 2 = 10 links.



Advantages: 1. No common channel used, so network load is reduced. 2. If one link fails, there are always alternative routes between the same pair of nodes through which traffic can be routed. 3. Point – to – point links make fault identification and fault isolation easy.
Disadvantages: 1. Very costly. 2. Very complex. 3. Difficult to upsize / downsize.
Tree topology
Here the nodes are connected in a hierarchical way, mainly used where we have organizations which are divided into departments that are further divided into sub-departments.





Advantages: 1. The network architecture reflects the structure of a real world organization. 2. Restructuring the network is easy.
Disadvantages: 1. If the root node of a department fails, the entire sub-network under that department becomes inaccessible.
Mixed topology
In a real world LAN, especially if it is a large one, we most often see that different part of it has been built with different topologies. This is called mixed topology and it is a very common thing in practice.
NETWORK PROTOCOL
A protocol is a set of rules by which two computers (or even two persons) can communicate with each other. A protocol is essential because otherwise two computers which have different hardware or software configurations, operating systems or topologies may not be able to communicate with each other effectively. A protocol is nothing but a set of rules agreed upon by the communicating parties.

You can compare a protocol to the way you communicate with your friends, teachers or parents. If you think about how you talk with the people around you, you will have an idea of what a protocol is. When you talk to your friends, you talk with a certain tone, body gesture, topic and language. However, when you talk to your teacher or your parents, you adopt a different set of tone, body gesture, topic and language, which does not match the set you are comfortable with among your friends. Well, that's exactly what protocols are. If you talk to your father in the same way as you do with your friend, the experience would not be very pleasing to your father, or to you!
Protocols used in computer network are more complex than the example we have discussed so far. Such a protocol mainly performs the following functions:
1) Establishes necessary connections
2) Establishes a standard communication path
3) Establishes a standard data element
For two entities to communicate successfully, they must speak the same language. What is communicated, how it is communicated and when it is communicated must conform to some conventions mutually acceptable to the entities involved. The conventions are referred to as a protocol, which may be defined as a set of rules governing the exchange of data between two entities.

Before we start a discussion about them we must first understand the structure of a protocol. A telecommunication protocol is generally built as a stack of rules.

Consider a situation where a person who speaks Bengali wants to say something over the telephone to a friend who speaks Hindi, and neither understands the other's language. In that case they would need a pair of interpreters who can talk in a common language. On one side an interpreter would translate the Bengali message into a common global language, say English, and on the other side the other interpreter would translate that English message into Hindi. A stack of protocols would then look like the following:

Friends’ Layer
Interpreters’ Layer
Telephone Layer

As the diagram suggests, the friends themselves are talking according to a protocol set by them. However, as they cannot talk to each other directly, they must take help from interpreters, and both the friends have one each. At the sender’s end, interpreter 1 translates the message in Bengali to English, and picks up the telephone to call interpreter 2 who receives the message in English too. The message itself, however, is being transmitted as electromagnetic pulse over the telephone network. When interpreter 2 receives the English message, translates it into Hindi and passes it to friend 2, the message transmission is complete.

A very curious thing is that each layer has a fixed set of rules of its own. The friends, as already explained, must have a protocol of their own. The interpreters too must have some common rules, at the very least a common language. At the lowest layer, i.e. the telephone layer, some rules must be followed regarding how to dial a number, how to initiate or disconnect a call, and so on. Each of these layers, to which we have given a specific name, therefore has its own set of rules. For this reason, each layer may think that it is solely responsible for the delivery of the message. The friends may think that they are communicating; the interpreters may think that they are passing the message; and the telephones may think that they are transmitting the message in electromagnetic form. In a sense, all of them are right. The thick horizontal arrows shown in the diagram represent the logical path of the message, which gives each layer the feeling that the counterparts on both sides are communicating directly. However, the real physical path of the message is shown by the thin arrows that collectively make a U shape in the diagram.
All in all, for a successful delivery of a message, each protocol must work correctly both with the protocol of the same layer on the other side (horizontally) and with the protocols of the adjacent layers (vertically). This makes an end-to-end message delivery involve at least three protocols. In short, they can be referred to not as three different protocols, but collectively as a protocol suite.
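To make the idea of a protocol suite a little more concrete, here is a toy C sketch in which each layer on the sender's side wraps the message in its own header and the peer layer on the receiver's side removes it. The layer names follow the translator example above, and the bracketed text headers are purely our own illustration; real protocol headers are binary structures, not text.

#include <stdio.h>
#include <string.h>

/* Each layer "encapsulates" what it receives from the layer above by
   adding its own header; the peer layer on the other side removes it. */
static void wrap(char *buf, const char *header)
{
    char tmp[256];
    snprintf(tmp, sizeof tmp, "[%s]%s", header, buf);
    strcpy(buf, tmp);
}

static void unwrap(char *buf, const char *header)
{
    size_t hlen = strlen(header) + 2;               /* length of "[header]" */
    memmove(buf, buf + hlen, strlen(buf + hlen) + 1);
}

int main(void)
{
    char msg[256] = "Hello, friend 2";

    /* Sender's side: the message travels down the stack. */
    wrap(msg, "FRIEND");         /* friends' layer      */
    wrap(msg, "INTERPRETER");    /* interpreters' layer */
    wrap(msg, "TELEPHONE");      /* telephone layer     */
    printf("On the wire: %s\n", msg);

    /* Receiver's side: the message travels up the stack. */
    unwrap(msg, "TELEPHONE");
    unwrap(msg, "INTERPRETER");
    unwrap(msg, "FRIEND");
    printf("Delivered  : %s\n", msg);
    return 0;
}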

A network protocol is very similar, and rather than calling it a protocol, we shall call it a protocol suite from now on.

When computer networking was created for the very first time, the developers were working under a tight time schedule. They had to build a working system in a hurry and therefore had to devise quick working solutions to get things running. The set of protocols which they used is the TCP/IP protocol suite. Because it was devised in a hurry, it was not well structured, but it worked quite well. Later on, scientists proposed another set of protocols which was much better structured than TCP/IP, known as the OSI Reference Model. However, by that time TCP/IP was being used so widely that nobody was interested in making the huge investment required for the installation of OSI, only for the sake of a better structure and nothing else. To this day we are using the TCP/IP protocol suite. However, students have to study the OSI Reference Model first because it is well organized, and the layered structure of a protocol suite is easier to understand if the OSI Reference Model is learnt before the TCP/IP model.



Open System Interconnect (OSI) Model

ISO (the International Standards Organization) has promoted the Open Systems Interconnection (OSI) model. The purpose of this International Standard Reference Model is to provide a common basis for the coordination of the development of standards for the purpose of systems interconnection. It is also the purpose of this standard to identify areas for developing or improving standards and to provide a common reference for maintaining the consistency of all related standards.
OSI has the following features:
1) It allows various computers from different manufacturers to communicate.
2) This model is primarily based on IBM’s System Network Architecture (SNA).
3) It has seven layers. Layering helps to simplify the complexity of the structure and consolidate similar functions.
4) The user sees only the application. The user of the network need not be aware of the details of functioning of the various lower layers.
5) Each layer uses the service of the layer immediately below.
6) The data units crossing each interface between the layers depends on the implementation of layers.
7) The data units at each protocol level are standardized.










The following is a diagram of the OSI Reference Model protocol suite:

Application Layer Protocol
Presentation Layer Protocol
Session Layer Protocol
Transport Layer Protocol
Network Layer Protocol
Data Link Layer Protocol
Physical Layer Protocol

The above diagram is comparable to the earlier protocol suite example diagram. OSI has seven layers, whose names have been given above. Each layer has a protocol of its own. Below we describe the activity of each protocol.
The three lowermost layer protocols are called the host-to-router protocols. This is because the lower layers are not sophisticated enough to manage anything more than the delivery of a data item from the host computer to the nearest router, or the fetching of a data item from the nearest router to the host computer.
On the other hand, the upper four layers are so sophisticated that they can abstract away the underlying routers and can manage the delivery of a message from the source computer to the destination computer. For this reason these protocols are called host-to-host protocols.

1) Physical Layer
• Raw bit transfer
• Allocation of pins
• Establishing connections
• Bit error control
• Electrical and mechanical specifications for the interface, for example determining the voltages used to represent a 0 or a 1.
2) Data Link Layer
• Data link connection activation and deactivation
• Mapping data units provided by the network layer into data link protocol units for transmission; the raw bits are packed into data frames
• Sending the data frames sequentially to the receiver, which returns an acknowledgement frame
• Error detection, recovery and notification
• Identification and parameter exchange with data link parties
• Adaptation of speed between sender and receiver, by making the sender regulate its own speed so as not to overflow the receiver’s buffer
3) Network Layer
Lower level
• Network addressing and end – point identification
• Multiplexing network connection onto data link
• Segmenting and/ or blocking to facilitate data transfer
• Service selection when different services are available
Higher level
• Error detection and data recovery
• Error notification
• Sequenced delivery of data
• Flow control
• Termination services when requested by a using party
• Congestion control during heavy network traffic
• Counting the number of data packets downloaded or uploaded so as to charge the user
4) Transport Layer
Establishment phase
• Selection of network service as a function of parameters such as transit delay, setup delay and error characteristic
• Management of transport connection to lower level connections
• Establishment of data unit size
• Selection of functions used during data transfer
• Transport of data from the Session layer, breaking it up into smaller units and transferring them to the Network layer (at the sender's side)
• Performing the reverse operation at the receiver's side
Data transfer phase
• Blocking
• Concatenation
• Segmenting
• Multiplexing of connections provided by lower levels
• Flow control in a session oriented, end – to – end sense
• Maintenance of connection identity between the two transport functions
• Error detection for lost, damaged, duplicated, disordered or undelivered data units
• Data recovery from detected problems
Termination phase
• Notification of termination reason
• Identification of connection terminated
This is a true end-to-end layer protocol. Unlike the previous three layers, which are host-to-router protocols, at the transport layer a program on the source machine can interact directly with a program on the destination machine.
5) Session Layer
• Session connection establishment. A session is an agreed connection between two computers by which they can log into each other, chat, share information, transfer files and then log out.
• Session connection release
• Normal data exchange
• Exception reporting
• Management of the session
• Address translation
• Allows token exchange between different machines so that both know that they are not attempting the same operation at the same time.
• Allows synchronization of processes, so that if a process fails, only the data lost after the most recent synchronization point needs to be retransmitted rather than the entire data
6) Presentation Layer
• Session initiation and termination requests
• Data transformation
• Data formatting
• Syntax selection
• Encryption to ensure security
7) Application Layer
• Establishment of authority to communicate
• Determination of cost – allocation methodology
• Synchronization of applications
• Establishment of data recovery responsibility
• Data validity
• Information and file transfer
• OSI MODEL (layer groupings):

User oriented, users of the transport service: Application, Presentation, Session
End to end, connection oriented: Transport
Network service, point to point link oriented: Network, Data Link, Physical



TCP/IP Reference Model
As mentioned earlier, this was the first protocol suite used in networking. It has the following four layers:

Application layer: Telnet, FTP, SMTP, DNS etc.
Transport layer: TCP, UDP etc.
Internet layer: IP
Host to network layer: LAN, ARPANET, SATNET, Packet Radio etc.

1) Host to Network Layer
The issues in this layer correspond roughly to the combined physical and data link layers of the OSI reference model.

2) Internet Layer
• Allows the host to inject packets into any available connectionless networks and make them travel independently (often not in order) to the destination
• For the above-mentioned purpose, this layer defines an official packet format and protocol known as the Internet Protocol (IP). The job of this layer is to correctly route and deliver the IP packets.
3) Transport Layer
• Allows programs of source and destination machines to interact
• For the above-mentioned purpose the transport layer defines two end-to-end protocols – the connection-oriented Transmission Control Protocol (TCP) and the connectionless User Datagram Protocol (UDP).

TCP: Delivers a byte stream error-free from a source to a destination by fragmenting the stream into discrete messages and passing them to the Internet layer. At the destination, TCP receives and reassembles the messages to rebuild the original stream.

UDP: This is a connectionless protocol for very short messages like client-server request-reply that do not need TCP type sophistication.

4) Application Layer

• Defines all the higher level protocols that run user application programs over the network.
TELNET: This is a protocol that enables a user to enter a command in the local machine that will be run in the remote machine.

FTP: To transfer large files from one machine to another.

SMTP: Simple Mail Transfer Protocol to manage e-mails
DNS: Mapping host name into the network address.
HTTP: For fetching pages written in HTML from WWW.

Comparison between OSI and TCP/IP Reference Model

Similarities
Both are based on the concept of a stack of independent protocols. The functions of these layers are roughly similar.

Differences
• For OSI, the model came first and the protocols were formulated later, but for TCP/IP, the protocols came first and the model is just a description of the protocols.
• Protocols in the OSI model are better hidden than in the TCP/IP model and thus can be replaced relatively easily as the technology changes. TCP/IP is more protocol-oriented than OSI.
• The OSI model was devised before its protocols were invented, so its designers had little experience with protocols and did not always put the right functionality in the right layer. The TCP/IP model, being a description of protocols that already existed, fits its protocols perfectly, although the model itself is less well defined.
• OSI model has seven layers while TCP/IP has just four.
• Both models support connectionless and connection-oriented services, but OSI supports only connection-oriented service where it matters most – at the transport layer – while TCP/IP provides both types of service in the transport layer.

DATA, SIGNAL AND TRANSMISSION METHODS
What is Data?
Data is any bit or collection of bits that has any meaning. The job of any network is basically to carry data from a source machine to a destination machine.
What is Signal?
When data is transmitted, it becomes a signal. A signal is data in motion from one machine to another. Data is sent in the form of changing electrical signals over the transmission medium. There are two types of signals in nature – analog and digital.
Analog Signals
These are signals which are continuous in nature. They have continuous (non-discrete) amplitude levels as shown in the figure below. Due to the non-discrete nature of amplitudes, it is very difficult to physically remove any noise and distortion which gets added during transmission or by other means. Because of their inherent nature, analog signals are prone to noise and distortion errors.

The above picture shows how an analog signal moves through a medium (in this case in the form of a sine wave). This sine wave can be mathematically represented as
s(t) = A sin(2πft + Φ)
where s(t) is the instantaneous amplitude, which is being calculated,
A is the peak amplitude: absolute value of the highest intensity, proportional to the energy that the signal carries.
f is the frequency: refers to the number of periods in one second, which is represented in hertz (Hz). (Period refers to the amount of time, in seconds, that a signal needs to complete one cycle.) Frequency can also be defined as the rate of change in the analog signal.
And finally, Φ is the phase: the position of the waveform relative to time zero.
Composite Signal and Fourier Analysis
If we send a simple sine wave through the medium, problems may arise. A single sine wave would simply convey no significance at all. It is suitable for neither voice communication nor data communication. We need modulation to alter the sine wave so that it can convey 0s and 1s with their true meaning. In order to do this, the amplitude of the sine wave may have to be altered; for example, the maximum amplitude could mean a 0 and the minimum amplitude a 1. However, with these modifications done, the signal is not a simple sine wave anymore. It would now be called a composite signal, which can be made out of many different sine waves.
The French mathematician Fourier showed that any composite signal is a sum of a set of sine waves of different frequencies, phases and amplitudes. In other words, the formula for a composite signal can be written as
s(t) = A1 sin(2πf1t + Φ1) + A2 sin(2πf2t + Φ2) + A3 sin(2πf3t + Φ3) + ….
where the subscripts 1, 2, 3, etc. number the different sine waves that form the composite signal.
Let us consider the following square wave transmission, which is a composite signal. The peak amplitude of this wave is A and it has a frequency of f, with period T. According to Fourier analysis, this signal can be decomposed into a series of sine waves as follows:

s(t) = (4A/π) sin(2πft) + (4A/3π) sin(2π(3f)t) + (4A/5π) sin(2π(5f)t) + (4A/7π) sin(2π(7f)t) + …
In other words, we have a series of sine waves with frequencies f, 3f, 5f, 7f…. and amplitudes 4A/π, 4A/3π, 4A/5π, 4A/7π and so on. The term with the first frequency, i.e. f is called the fundamental frequency, while the term with frequency 3f is called the third harmonic; the term with frequency 5f is called the fifth harmonic etc. If the square wave has a frequency of 5000 Hz, then its components would have frequencies 5000, 15000, 25000 and so on (in odd multiples).
The description of a signal using the frequency domain and containing all its components is called the frequency spectrum of that signal.
The range of frequencies that a transmission medium can pass is called the bandwidth of that medium. Because every medium absorbs some of the power of a signal, bandwidth generally refers to the range of frequencies that a medium can pass without losing half of the power contained in the signal. For example, if a medium can pass signals in the range of 1000 Hz to 5000 Hz while preserving at least half of the signal strength, then the bandwidth of that medium is (5000 – 1000) = 4000 Hz.
It is important that the bandwidth of a medium should cover the entire frequency spectrum of the signal that it carries. Otherwise the minimum and the maximum frequencies of a composite signal could be lost.
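The square-wave decomposition above is easy to experiment with. The following small C sketch (compile with the math library, e.g. -lm) sums the first few odd harmonics with amplitudes 4A/π, 4A/3π, 4A/5π, … and evaluates the approximation at t = T/4; the values of A and f are illustrative choices, and the point is only that adding more harmonics brings the sum closer to the flat top of the square wave.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

/* Sum the first n_terms odd harmonics of a square wave of peak
   amplitude A and frequency f, evaluated at time t. */
static double square_approx(double A, double f, double t, int n_terms)
{
    double s = 0.0;
    for (int k = 0; k < n_terms; k++) {
        int n = 2 * k + 1;                      /* harmonics 1, 3, 5, 7, ... */
        s += (4.0 * A / (n * PI)) * sin(2.0 * PI * n * f * t);
    }
    return s;
}

int main(void)
{
    double A = 1.0, f = 5000.0;                 /* 5000 Hz, as in the text */
    double T = 1.0 / f;

    /* The more harmonics we add, the closer we get to the flat top (A = 1). */
    for (int n = 1; n <= 25; n *= 5)
        printf("%2d term(s): s(T/4) = %f\n", n, square_approx(A, f, T / 4.0, n));
    return 0;
}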
Digital Signals
These are signals with well-defined, discrete amplitude levels, as can be seen in the figure below. The number of these discrete levels depends upon the definition of a specific signal. When we deal with computer systems, the number of these discrete levels is usually two (one for 0, another for 1), though it may vary. It is easier to eliminate noise from digital signals than from analog signals.





In the above diagram we have shown a digital signal that uses a positive voltage to represent 1 and zero voltage (no voltage) to represent 0.
The additional parameters needed for digital signal propagation are defined below.
Bit Interval: the time required to send a single bit (0 or 1).
Bit Rate: number of bit intervals per second. This means that bit rate is the number of bits sent in 1 second. Normally bit rate is expressed with the unit bits per second or bps. Bit interval is the inverse of bit rate.
Baud Rate: refers to the number of signal units per second that are required to represent the bit rate. The baud rate is less than or equal to the bit rate.
Example 1: An analog signal carries 4 bits in each signal unit. If 1000 signal units are sent per second, find the baud rate and the bit rate.
Solution: Baud rate = number of signal units per second = 1000 baud/s
Bit rate = baud rate X number of bits per signal unit = 1000 X 4 = 4000 bps.
Example 2: The bit rate of a signal is 3000; if each signal unit carries 6 bits, what is the baud rate?
Solution: Baud rate = bit rate / number of bits per signal unit = 3000 / 6 = 500 baud / second.
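The arithmetic of the two examples can be reproduced with a couple of lines of C; the function names here are our own.

#include <stdio.h>

/* Bit rate = baud rate x bits per signal unit, and the reverse. */
static double bit_rate(double baud, double bits_per_signal)
{
    return baud * bits_per_signal;
}

static double baud_rate(double bits, double bits_per_signal)
{
    return bits / bits_per_signal;
}

int main(void)
{
    printf("Example 1: bit rate  = %.0f bps\n",  bit_rate(1000, 4));   /* 4000 */
    printf("Example 2: baud rate = %.0f baud\n", baud_rate(3000, 6));  /*  500 */
    return 0;
}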

Propagation Time: it measures the time required for a signal to travel from one point of the transmission medium to another.
Propagation time = Distance/ Propagation speed
Wavelength: it binds the period or the frequency of a simple sine wave to the propagation speed of the medium. The wavelength is the distance a simple signal can travel in one period.
Wavelength = Propagation speed X Period
Pulse rate: it defines the number of pulses per second. A pulse is the minimum amount of time required to transmit a symbol. The bit rate defines the number of bits per second. If a pulse carries only 1 bit, the pulse and the bit rate are the same. If the pulse carries more than 1 bit, then the bit rate is greater than the pulse rate.
Bit Rate = Pulse Rate X log2 L, where L is the number of different signal levels (each pulse then carries log2 L bits).
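A short C sketch (linked with -lm) evaluates the propagation time, wavelength and pulse-rate formulas given above; all the numeric values are our own illustrative choices.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double speed = 2.0e8;        /* assumed propagation speed in cable, m/s */
    double distance = 10000.0;   /* a 10 km link, chosen for illustration   */
    double frequency = 1.0e6;    /* a 1 MHz sine wave                       */
    double pulse_rate = 1000.0;  /* pulses per second                       */
    double levels = 4.0;         /* L signal levels, so log2(L) bits/pulse  */

    printf("Propagation time = %g s\n", distance / speed);
    printf("Wavelength       = %g m\n", speed * (1.0 / frequency)); /* speed x period */
    printf("Bit rate         = %g bps\n", pulse_rate * log2(levels));
    return 0;
}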
Relationship of Digital Signal and Composite Analog Signal
We often have to convert a digital signal to a composite analog signal or vice versa. A digital signal can be considered a composite signal having an infinite number of frequencies. In other words, the bandwidth of a digital signal is infinite.
The next question, therefore, is how we can transmit a digital signal through a medium of limited bandwidth, because every medium has a bandwidth limitation. This is definitely possible, as we can send or receive digital data on the Internet using the analog telephone line, which has a bandwidth limitation. We have to find a relationship between the number of bits that a digital device needs to send and the minimum bandwidth offered by the medium over which the transmission would occur. This relationship is expressed by the Nyquist theorem and the Shannon capacity, which we shall discuss now.


Data Rate Limits
Data rate depends on three factors:
1) The bandwidth available
2) The levels of signals we can use
3) The quality of the channel (the level of the noise).

Two theoretical formulas have been developed to calculate the data rate: for noiseless channel the Nyquist Bit Rate Theorem and for noisy channel the Shannon Capacity Theorem.
Nyquist Bit Rate for Noiseless Channel
For a noiseless channel, the ‘Nyquist bit rate’ defines the theoretical maximum data rate as
Bit Rate = 2 X Bandwidth X log2 L
Where L = number of signal levels used to represent data.
Shannon Capacity for Noisy Channel
In reality, we can’t have a noiseless channel. To determine the theoretical highest data rate for a noisy channel:
Capacity = Bandwidth X log2 (1 + SNR)
Where SNR = Signal to Noise Ratio and Capacity = Capacity of the channel in bps.
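Both limits are straightforward to compute. The sketch below (again linked with -lm) plugs in purely illustrative numbers: a 3000 Hz channel with 4 signal levels for Nyquist, and the same bandwidth with an assumed SNR of 1000 for Shannon.

#include <stdio.h>
#include <math.h>

/* Nyquist: maximum bit rate of a noiseless channel with L signal levels. */
static double nyquist(double bandwidth, double levels)
{
    return 2.0 * bandwidth * log2(levels);
}

/* Shannon: capacity of a noisy channel; SNR is a plain ratio, not in dB. */
static double shannon(double bandwidth, double snr)
{
    return bandwidth * log2(1.0 + snr);
}

int main(void)
{
    printf("Nyquist bit rate = %.0f bps\n", nyquist(3000.0, 4.0));    /* 12000  */
    printf("Shannon capacity = %.0f bps\n", shannon(3000.0, 1000.0)); /* ~29902 */
    return 0;
}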


Data Transmission Modes
The modes of data transfer can be of two types – Serial and Parallel.
Serial Transmission
When data is transmitted serially, i.e. one bit at a time, it is called serial transmission. Only one wire is used for data transmission in any particular direction. Data bits are transmitted one after another. For example, if we have to transmit a byte (8 bits) of information from one place to another, each bit will be transmitted one after another and hence the transfer will take 8 units of time.


Serial transmission can again be divided into two categories, Asynchronous and Synchronous.
Asynchronous Serial Transmission
It is so named because the timing of a signal is unimportant. Instead, information is received and translated by agreed-upon patterns. As long as those patterns are followed, the receiving device can retrieve the information without regard to the rhythm in which it is sent. Patterns are based on grouping the bit stream into bytes. Each group, usually 8 bits, is sent along the link as a unit. The sending system handles each group independently, relaying it to the link whenever ready, without regard to a timer.
Without synchronization, the receiver can’t use timing to predict when the next group will arrive. To alert the receiver to the arrival of a new group, therefore, an extra bit is added to the beginning of each byte. This bit, usually a 0, is called the start bit. To let the receiver know that the byte is finished, 1 or more additional bits are appended to the end of the byte. These bits, usually 1s, are called stop bits. In addition, the transmission of each byte may then be followed by a gap of varying duration. This gap can be represented either by an idle channel or by a stream of additional stop bits.
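A toy C sketch of this framing rule: each byte is sent as a start bit (0), its 8 data bits and a stop bit (1). It is only a bit-level illustration of the idea described above, not a UART driver; the least-significant-bit-first order is an assumption made for the example.

#include <stdio.h>

/* Frame one byte for asynchronous transmission:
   a start bit (0), 8 data bits (least significant bit first), a stop bit (1). */
static void frame_byte(unsigned char b, int out[10])
{
    out[0] = 0;                        /* start bit */
    for (int i = 0; i < 8; i++)
        out[1 + i] = (b >> i) & 1;     /* data bits */
    out[9] = 1;                        /* stop bit  */
}

int main(void)
{
    int frame[10];
    frame_byte('A', frame);            /* 'A' = 0x41 */
    printf("Frame for 'A': ");
    for (int i = 0; i < 10; i++)
        printf("%d", frame[i]);
    printf("\n");                      /* prints 0100000101 */
    return 0;
}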
Synchronous Serial Transmission
The bit stream is combined into longer frames, which may contain multiple bytes. Each byte, however, is introduced onto the transmission link without a gap between it and the next one. It is left to the receiver to separate the bit stream into bytes for decoding purposes. Data are transmitted as an unbroken string of 1s and 0s, and the receiver separates that string into the bytes, or characters, it needs to reconstruct the information.

Parallel Transmission
In parallel transmission, several bits are transmitted simultaneously over multiple transmission lines. The number of bits which are transmitted simultaneously depends on the design of the system. If we are dealing with a computer system which uses ASCII code (7 bit code), we need to design a transmission link with eight separate wires D1 to D8 as shown in the figure below, each wire carrying one bit. Additional wires are required for control signals (clock / strobe).


Difference between Serial and Parallel Transmission
a) Serial mode is less costly as it requires only one wire for data as compared to many wires in parallel mode.
b) Parallel transmission speed is much higher than serial transmission as it transmits ‘n’ bits at a time compared to 1 bit in serial mode.
c) Parallel transmission throughput is higher compared to serial transmission.
d) Due to cost and other factors, serial transmission is used for long distance communication systems whereas parallel transmission is restricted to short distance communication.

Simplex, Half duplex and Full duplex Modes of Transmission
In simplex mode, data is transmitted in one direction only. This means one end will always be a transmitter and the other end will always be a receiver, e.g. radio, television transmission.


In half duplex mode data is permitted to flow in either direction, but not simultaneously. At a given time, the transmission can take place only in one direction, e.g. walkie–talkie.

OR

In full duplex mode data can be transmitted in both directions simultaneously. e.g. telephone.

Full duplex mode can be created by putting two opposite directional simplex channels together.


Transmission Impairment
Signals travel through transmission media, which are not perfect. The imperfections cause impairment in the signal. This means that the signal at the beginning and end of the medium are not the same. Three types of impairment usually occur: attenuation, distortion and noise.
Attenuation: It means loss of energy. When a signal, simple or composite, travels through a medium, it loses some of its energy in overcoming the resistance of the medium. Some of the electrical energy in the signal is converted to heat. To compensate for this loss, amplifiers are used to amplify the signal.
Decibel: To show that a signal has lost or gained strength, engineers use the concept of the decibel. The decibel (dB) measures the relative strengths of two signals or a signal at two different points. Note that the decibel is negative if a signal is attenuated and positive if a signal is amplified.
dB = 10 log10 (P2/P1)
Where P1 and P2 are the powers of a signal at points 1 and 2, respectively.
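A two-line C check of the decibel formula; the power values used here are made up purely for illustration.

#include <stdio.h>
#include <math.h>

/* dB = 10 log10(P2 / P1): negative for attenuation, positive for gain. */
static double decibel(double p1, double p2)
{
    return 10.0 * log10(p2 / p1);
}

int main(void)
{
    printf("Half the power lost:  %.2f dB\n", decibel(10.0, 5.0));    /* about -3 dB */
    printf("Power amplified x 10: %+.2f dB\n", decibel(10.0, 100.0)); /* +10 dB      */
    return 0;
}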
Distortion: It means that the signal changes its form or shape. It occurs in a composite signal, made of different frequencies. Each frequency component has its own propagation speed through a medium and therefore its own delay in arriving at the final destination.
Noise: It is another problem. Several types of noise, such as thermal, induced, crosstalk and impulse noise, may corrupt the signal. Thermal noise is the random motion of electrons in a wire, which creates an extra signal not originally sent by the transmitter. Induced noise comes from sources such as motors and appliances. These devices act as a sending antenna and the transmission medium acts as the receiving antenna. Crosstalk is the effect of one wire on another: one wire acts as a sending antenna and the other as the receiving antenna. Impulse noise is a spike (a signal with high energy in a very short time) that comes from power lines.


Transmission Media
Transmission media can be divided into two broad categories: guided and unguided. Guided media include twisted pair cable, coaxial cable and fiber optic cable. Unguided medium is usually air.
Guided Media
Guided media include any kind of cable that provides a connection from one device to another.
Twisted Pair Cable
It consists of two conductors, each with its own plastic insulation, twisted together. One of the wires is used to carry signals to the receiver and the other is used only as a ground reference. In addition to the signal sent by the sender on one of the wires, noise and crosstalk may affect both wires and create unwanted signals. By twisting the pairs, a balance is maintained. Twisting makes it probable that both wires are equally affected by external noise. This means that the receiver, which calculates the difference between the two, receives no unwanted signals.
The following are the characteristics of twisted pair cable:
1) Used in STAR topology 2) HUB is used 3) 4 pair of wires 4) 2 pairs are used (one pair for read/ one pair for write) 5) 2 pairs are reserved 6) RJ 45 connector is used 7) EMI is high 8) Attenuation 100 m 9) Cost is low 10) Reliable 11) The distance between 2 machines is 200 m

The most common twisted pair cable used in communication is referred to as unshielded twisted pair (UTP) and shielded twisted pair (STP). STP cable has a metal foil covering that encases each pair of insulated conductors. The most common UTP connector is RJ45 (RJ stands for Registered Jack). Twisted pair cables are used in telephone lines to provide voice and data channels. The local loop – the line that connects subscribers to the central telephone office – is most commonly UTP cable. The digital subscriber line (DSL) connections used by telephone companies to provide high data rate access also use the high bandwidth capability of UTP cables. Local area networks, such as 10Base-T and 100Base-T, also use twisted pair cables.
Coaxial Cable
It carries signals of higher frequency ranges. A coaxial cable has a central core conductor of solid wire (usually copper) enclosed in an insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid or a combination of the two. The outer conductor is also enclosed in an insulating sheath and the whole cable is protected by a plastic cover. Coaxial cables are categorized by their radio government (RG) ratings. Each RG number denotes a unique set of physical specifications, including the wire gauge of the inner conductor, the thickness and type of the inner insulator, the construction of the shield and the size and type of the outer casing.
Category    Impedance    Use
RG-59       75 Ω         Cable TV
RG-58       50 Ω         Thin Ethernet
RG-11       50 Ω         Thick Ethernet

Coaxial cables can be of two types – thin coaxial cable and thick coaxial cable. They can be used for baseband and broadband applications.
Fiber Optic Cable
A fiber optic cable is made of glass or plastic and transmits signals in the form of light. Optical fibers use total internal reflection to guide light through a channel. A glass or plastic core is surrounded by a cladding of less dense glass or plastic.
On the basis of mode of operation, it can be of two types - Single Mode and Multimode Fiber. On the basis of physical construction, it can be of two types – Step Index Fiber and Graded Index Fiber.
Fiber optic cable is used in networks and telecommunications because of its wide bandwidth and cost effectiveness. Fiber optics provide great speed and very little data corruption. Today, with WDM, we can transfer data at a rate of 1600 Gbps. Some cable TV companies use a combination of optical fiber and coaxial cable, thus creating a hybrid network.


Unguided Media
It transports electromagnetic waves without using a physical conductor. This type of communication is often referred to as wireless communication. Signals are normally broadcast through air and thus are available to anyone who has a device capable of receiving them.
Unguided signals can travel from the source to destination in several ways. There is ground propagation, sky propagation and line-of-sight propagation.
In ground propagation, radio waves travel through the lowest portion of the atmosphere, hugging the earth. These low frequency signals radiate in all directions from the transmitting antenna and follow the curvature of the planet. Distance depends on the amount of power in the signal: the greater the power, the greater the distance.
In sky propagation, higher frequency radio waves radiate upward into the ionosphere where they are reflected back to earth. This type of transmission allows for greater distance with lower power output.
In line-of-sight propagation, very high frequency signals are transmitted in straight lines directly from antenna to antenna. Antennas must be directional, facing each other and either tall enough or close enough together not to be affected by the curvature of the earth.
We can divide wireless transmission into three broad groups: radio waves, microwaves and infrared waves.
Radio Waves
Electromagnetic waves ranging in frequency between 3 kHz and 1 GHz are normally called radio waves; waves ranging in frequency between 1 and 300 GHz are called microwaves.
Radio waves are omnidirectional. When an antenna transmits radio waves, they are propagated in all directions. A sending antenna can send waves that can be received by any receiving antenna. The radio waves transmitted by one antenna are susceptible to interference from another antenna that may send signals using the same frequency or band.
Radio waves, particularly those waves that propagate in the sky mode, can travel long distances. This makes radio waves a good candidate for long distance broadcasting such as AM radio. Radio waves, particularly those of low and medium frequencies, can penetrate walls. Radio is useful for multicasting, in which there is one sender but many receivers. AM and FM radio, television, cordless phones and paging are examples of multicasting.
Microwave
Electromagnetic waves having frequencies between 1 and 300 GHz are called microwaves. These are unidirectional. When an antenna transmits microwaves, they can be narrowly focused. This means that the sending and receiving antennas need to be aligned. The unidirectional property has an obvious advantage: a pair of antennas can be aligned without interfering with another pair of aligned antennas.
Microwave propagation is line-of-sight. Since the towers with the mounted antennas need to be in direct sight of each other, towers that are far apart need to be very tall. The curvature of the earth, as well as other blocking obstacles, does not allow two short towers to communicate using microwaves. Repeaters are often needed for long distance communication.
Very high frequency microwaves cannot penetrate walls. These are very useful when unicasting (one – to – one) communication is needed between the sender and the receiver. They are used in cellular phones, satellite networks and wireless LANs.
Infrared
Infrared signals, with frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to 770 nm), can be used for short range communication. Infrared signals, having high frequencies, cannot penetrate walls. When we use our infrared remote control, we do not interfere with the use of the remote by our neighbors. We cannot use infrared waves outside a building because the sun’s rays contain infrared waves that can interfere with the communication.
The infrared band, almost 400 THz, has an excellent potential for data transmission. Such a wide bandwidth can be used to transmit digital data with a very high data rate. The Infrared Data Association (IrDA), an association for sponsoring the use of infrared waves, has established standards for using these signals for communication between devices such as keyboards, mice, PCs and printers. For example, some manufacturers provide a special port called the IrDA port that allows a wireless keyboard to communicate with a PC. The standard originally defined a data rate of 75 Kbps for a distance up to 8 m. The recent standard defines a data rate of 4 Mbps.

DS PROGRAMME

ARRAY
#include <stdio.h>
#include <conio.h>
#define SIZE 5

void main(void)
{
int arr[SIZE];
int i;

clrscr();
for(i = 0 ; i < SIZE ; i++)
{
printf("\n\tEnter into index %d : ", i);
scanf("%d", &arr[i]);
}

printf("\n\tYou have entered : ");
for(i = 0 ; i < SIZE ; i++)
printf("\t%d", arr[i]);

getch();
}
#include <stdio.h>
#include <conio.h>
#define ROW 3
#define COL 4

void main(void)
{
int arr[ROW][COL];
int r, c;

clrscr();

for(r = 0 ; r < ROW ; r++)
{
printf("\n\tEnter for row index %d : ", r);
for(c = 0 ; c < COL ; c++)
{
printf("\n\t\tColumn index %d : ", c);
scanf("%d", &arr[r][c]);
}
}

printf("\n\tYou have entered : ");

for(r = 0 ; r < ROW ; r++)
{
printf("\n");
for(c = 0 ; c < COL ; c++)
printf("\t%d", arr[r][c]);
}

getch();
}
#include <stdio.h>
#include <conio.h>
#define SIZE 5

void main(void)
{
int arr[SIZE][SIZE];
int r, c, i;
int r1 = 0, r2 = 0, r3 = 0;
int n;

clrscr();

for(r = 0 ; r < SIZE ; r++)
{
for(c = 0 ; c < SIZE ; c++)
arr[r][c] = 0;
}

for(i = 1 ; i <= SIZE*SIZE ; i++)
{
printf("\n\tEnter number : ");
scanf("%d", &n);
/* EACH RANGE GOES INTO ITS OWN COLUMN (0-99 -> COLUMN 0, 100-199 -> COLUMN 1,
   200-299 -> COLUMN 2); THE BOUNDS CHECKS PREVENT WRITING PAST ROW SIZE-1
   IF ONE RANGE RECEIVES MORE THAN SIZE VALUES */
if(n < 100 && r1 < SIZE)
arr[r1++][0] = n;
else if(n >= 100 && n < 200 && r2 < SIZE)
arr[r2++][1] = n;
else if(n >= 200 && n < 300 && r3 < SIZE)
arr[r3++][2] = n;
}

printf("\n\tThe values you entered are : ");

for(r = 0 ; r < SIZE ; r++)
{
printf("\n\t");
for(c = 0 ; c < SIZE ; c++)
printf("\t%d", arr[r][c]);
}

getch();
}




#include <stdio.h>
#include <conio.h>
#define SIZE 5

/*
THIS PROGRAM DEMONSTRATES HOW TO PASS AN ARRAY TO A FUNCTION
*/
void main(void)
{
int arr[SIZE];
int i, s;
int sum(int []);

clrscr();
for(i = 0 ; i < SIZE ; i++)
{
printf("\n\tEnter into index %d : ", i);
scanf("%d", &arr[i]);
}

printf("\n\tYou have entered : ");
for(i = 0 ; i < SIZE ; i++)
printf("\t%d", arr[i]);

s = sum(arr);



printf("\n\tThe sum of these numbers is : %d", s);
getch();
}

int sum(int arr[])
{
int s = 0, i;

for(i = 0 ; i < SIZE ; i++)
s = s + arr[i];

return s;
}


#include <stdio.h>
#include <conio.h>
#include <stdlib.h> // FOR exit()
#define MAX 5

/*
IMPLEMENTATION OF LINEAR LIST USING AN ARRAY
*/

int list[MAX];

int head = -1; // THIS ACTS LIKE THE NULL VALUE
int tail = -1;

int isFull(void)
{
if(tail == MAX - 1)
return 1;
else
return 0;
}

int isEmpty(void)
{
if(head == -1)
return 1;
else
return 0;
}

void InsertAtEnd(void)
{
int info;
printf("\n\tEnter number : ");
scanf("%d", &info);
fflush(stdin);
if(isFull())
{
printf("\n\tCannot insert the new value. No space is available.");
getch();
return;
}
else
{
if(head == -1) // IF THIS IS THE FIRST ELEMENT
head++;
tail++;
list[tail] = info;
}
}
/*
ALGORITHM:
START
IF THE LIST IS FULL THEN
PRINT "CANNOT INSERT THE NEW VALUE."
RETURN
ELSE
IF IT IS THE FIRST ELEMENT THEN
INCREMENT HEAD
ENDIF
INCREMENT TAIL
ENDIF
SET LIST[TAIL] TO NEW VALUE
END
*/

void InsertAtBeginning(void)
{
int info;
int i;
printf("\n\tEnter number : ");
scanf("%d", &info);
fflush(stdin);
if(isFull())
{
printf("\n\tCannot insert the new value. No space is available.");
getch();
return;
}
else
{
if(head == -1) // IF THIS IS THE FIRST ELEMENT
{ head++;
tail++;
}
else // IF NOT, WE HAVE TO PUSH THE ENTIRE ARRAY ONE PLACE TO THE RIGHT TO ALLOCATE THE NEW NODE AT THE BEGINNING
{
for(i = tail ; i >= head ; i--)
list[i+1] = list[i];
tail++;
}
list[head] = info;
}
}
/*
ALGORITHM:
START
IF THE LIST IS FULL THEN
PRINT "CANNOT INSERT THE NEW VALUE."
RETURN
ELSE
IF IT IS THE FIRST ELEMENT THEN
INCREMENT HEAD
INCREMENT TAIL
ELSE
FOR I= TAIL TO HEAD STEP -1
SET LIST[I+1] = LIST[I]
ENDFOR
INCREMENT TAIL
ENDIF
ENDIF
SET LIST[HEAD] TO NEW VALUE
STOP
*/

void Display(void)
{
int i;
if(isEmpty())
{
printf("\n\tCannot display the list. The list is empty.");
getch();
return;
}
else
{
for(i = head ; i <= tail ; i++)
printf("\t%d", list[i]);
}
}
/*
ALGORITHM:
START
IF THE LIST IS EMPTY THEN
PRINT "CANNOT DISPLAY THE LIST."
RETURN
ELSE
FOR I= HEAD TO TAIL STEP 1
PRINT LIST[I]
ENDFOR
ENDIF
STOP
*/

int Search(int info)
{
int i;
if(isEmpty())
return -1;
else
{
for(i = head ; i <= tail ; i++)
{
if(list[i] == info)
return i;
}
}
return -1;
}
/*
ALGORITHM:
START
READ THE VALUE TO BE SEARCHED INTO VARIABLE 'INFO'
IF THE LIST IS EMPTY THEN
PRINT "CANNOT SEARCH THE LIST."
RETURN
ELSE
FOR I= HEAD TO TAIL STEP 1
IF LIST[I] = INFO THEN
PRINT "THE VALUE IS FOUND AT INDEX ",I
RETURN
ENDIF
ENDFOR
ENDIF
PRINT "THE VALUE IS NOT FOUND."
STOP
*/

void InsertAfterAGivenNode(void)
{
int i, j;
int info;
if(isFull())
{
printf("\n\tCannot insert the new value. No space is available.");
getch();
return;
}
printf("\n\tEnter the node after which you want to insert the new node : ");
scanf("%d", &info);
i = Search(info);
if(i == -1)
{
printf("\n\tThis value does not exist in the list.");
getch();
return;
}
else
{
// WE HAVE TO PUSH EACH ELEMENT ONE PLACE TOWARDS THE END OF THE ARRAY FROM LOCATION i
tail++;
for(j = tail ; j > i+1 ; j--)
list[j] = list[j-1];
printf("\n\tEnter the new node : ");
scanf("%d", &list[i+1]);
}
}
/*
ALGORITHM:
START
IF THE LIST IS FULL THEN
PRINT "CANNOT INSERT INTO THE LIST."
RETURN
ENDIF
PRINT "ENTER THE NODE AFTER WHICH YOU WANT TO INSERT THE NEW NODE : "
READ THE VALUE INTO VARIABLE 'INFO'

SEARCH THE VALUE IN THE LIST AND RETURN THE INDEX INTO VARIABLE 'I'

IF I = -1 THEN
PRINT "THE VALUE IS NOT FOUND."
RETURN
ELSE
INCREMENT TAIL
FOR J = TAIL TO I+2 STEP -1
LIST[J] = LIST[J-1]
ENDFOR
PRINT "ENTER THE NEW NODE : "
READ NEW VALUE INTO LIST[I+1]
ENDIF
STOP
*/


void DeleteAnyNode(void)
{
int i;
int info;
printf("\n\tEnter the node which you want to delete from the list : ");
scanf("%d", &info);
i = Search(info);
if(i == -1)
{
printf("\n\tThis value does not exist in the list.");
getch();
return;
}
else
{
if(head == tail) // IF THIS IS THE ONLY NODE
{
head = -1;
tail = -1;
}
else // IF NOT, WE HAVE TO PUSH THE ENTIRE ARRAY ONE PLACE TO THE LEFT TO OVERWRITE THE NODE
{
for( ; i < tail ; i++) // SHIFT LEFT; STOP AT tail-1 SO WE NEVER READ PAST THE LAST NODE
list[i] = list[i+1];
tail--;
}
}
}
/*
ALGORITHM:
START
PRINT "ENTER THE NODE YOU WANT TO DELETE : "
READ THE VALUE INTO VARIABLE 'INFO'

SEARCH THE VALUE IN THE LIST AND RETURN THE INDEX INTO VARIABLE 'I'

IF I = -1 THEN
PRINT "THE VALUE IS NOT FOUND."
RETURN
ELSE
IF HEAD = TAIL THEN
HEAD = -1
TAIL = -1
ELSE
WHILE I < TAIL
DO
LIST[I] = LIST[I+1]
INCREMENT I
DONE
DECREMENT TAIL
ENDIF
ENDIF
STOP
*/

void main(void)
{
int ch;

while(1)
{
clrscr();
printf("\n\n\n\t\tMENU");
printf("\n\t\t~~~~");
printf("\n\n\t\t 1. Insert Node At End");
printf("\n\t\t 2. Insert Node At Beginning");
printf("\n\t\t 3. Insert After A Given Node");
printf("\n\t\t 4. Display List");
printf("\n\t\t 5. Delete A Node");
printf("\n\t\t 6. Exit");
printf("\n\t\t\tEnter your choice : ");
scanf("%d", &ch);
fflush(stdin);

switch(ch)
{
case (1) : InsertAtEnd();
break;
case (2) : InsertAtBeginning();
break;
case (3) : InsertAfterAGivenNode();
break;
case (4) : Display();
break;
case (5) : DeleteAnyNode();
break;
case (6) : printf("\n\t\tExiting...");
getch();
exit(0);
default : printf("\n\t\tInvalid input.");
}
getch();
}
}
#include <stdio.h>
#include <conio.h>
#define SIZE 5

/*
MERGING OF TWO SORTED SUBARRAYS
*/

void main(void)
{
int a[SIZE], b[SIZE], c[2*SIZE];
int i, j, k;

clrscr();

printf("\n\tEnter into first array in ascending order : ");

for(i = 0 ; i < SIZE ; i++)
{
printf("\n\t\tEnter number : ");
scanf("%d", &a[i]);
}

printf("\n\tEnter into second array in ascending order : ");

for(j = 0 ; j < SIZE ; j++)
{
printf("\n\t\tEnter number : ");
scanf("%d", &b[j]);
}

i = 0;
j = 0;
k = 0;

while(i < SIZE && j < SIZE)
{
if(a[i] < b[j])
{
c[k] = a[i];
k++;
i++;
}
else if(a[i] > b[j])
{
c[k] = b[j];
k++;
j++;
}
else // DUPLICATES ARE NOT ALLOWED
{
c[k] = a[i];
k++;
i++;
j++;
}
}

if(i < SIZE)
{
while(i < SIZE)
{
c[k] = a[i];
k++;
i++;
}
}
else if(j < SIZE)
{
while(j < SIZE)
{
c[k] = b[j];
k++;
j++;
}
}

printf("\n\tThe merged array is : ");

for(i = 0 ; i < k ; i++)
printf(" %d", c[i]);

getch();
}

/*
ALGORITHM:
START
I = 0
J = 0
K = 0

WHILE I < SIZE AND J < SIZE
DO
IF A[I] < B[J] THEN
C[K] = A[I]
INCREMENT I, K
ELSE IF A[I] > B[J] THEN
C[K] = B[J]
INCREMENT J, K
ELSE
C[K] = A[I]
INCREMENT I, J, K
ENDIF
DONE

IF I < SIZE THEN
WHILE I < SIZE
DO
C[K] = A[I]
INCREMENT I, K
DONE
ELSE IF J < SIZE THEN
WHILE J < SIZE
DO
C[K] = B[J]
INCREMENT J, K
DONE
ENDIF

STOP
*/

#include <stdio.h>
#include <conio.h>
#define SIZE 5

void main(void)
{
int arr[SIZE];
int i, *ptr;

clrscr();

printf("\n\tAddress of first element %p %p", &arr[0], arr);

ptr = arr; // ptr POINTING TO FIRST ELEMENT OF THE ARRAY

for(i = 0 ; i < SIZE ; i++)
{
printf("\n\tEnter a number : ");
scanf("%d", ptr);
ptr++;
}

ptr = &arr[0]; // ptr POINTING TO FIRST ELEMENT OF THE ARRAY

printf("\n\tYou have entered : ");

for(i = 0 ; i < SIZE ; i++)
{
printf("\t%d", *ptr);
ptr++;
}

getch();
}

#include <stdio.h>
#include <conio.h>
#define SIZE 5

void main(void)
{
int arr[SIZE];
int i;

clrscr();

for(i = 0 ; i < SIZE ; i++)
{
printf("\n\tEnter a number : ");
scanf("%d", arr+i);
}

printf("\n\tYou have entered : ");

for(i = 0 ; i < SIZE ; i++)
{
printf("\t%d", *(arr+i) );
}

getch();
}
#include <stdio.h>
#include <conio.h>
#define SIZE 3

/*
DIAGONAL MATRIX
*/

void main(void)
{
int arr[SIZE][SIZE];
int r, c;

clrscr();

for(r = 0 ; r < SIZE ; r++)
{
for(c = 0 ; c < SIZE ; c++)
arr[r][c] = 0;
}

printf("\n\tEnter values into the diagonal matrix : ");

for(r = 0 ; r < SIZE ; r++)
{
for(c = 0 ; c < SIZE ; c++)
{
if(r != c)
continue;
// ONLY DIAGONAL ELEMENTS (ROW == COLUMN) ACCEPT INPUT; ALL OTHER ELEMENTS STAY ZERO
else
{
printf("\n\tEnter into row %d and col %d : ", r, c);
scanf("%d", &arr[r][c]);
}
}
}

printf("\n\tYou have entered : ");

for(r = 0 ; r < SIZE ; r++)
{
printf("\n\t\t");
for(c = 0 ; c < SIZE ; c++)
printf("\t%d", arr[r][c]);
}

getch();
}
#include <stdio.h>
#include <conio.h>
#define SIZE 3

void main(void)
{
int a[SIZE][SIZE], b[SIZE][SIZE];
int r, c;

clrscr();

for(r = 0 ; r < SIZE ; r++)
{
printf("\n\tEnter for row index %d : ", r);
for(c = 0 ; c < SIZE ; c++)
{
printf("\n\t\tColumn %d : ", c);
scanf("%d", &a[r][c]);
}
}

printf("\n\tOriginal matrix :\n");
for(r = 0 ; r < SIZE ; r++)
{
printf("\n\t");
for(c = 0 ; c < SIZE ; c++)
{
printf("\t%d", a[r][c]);
}
}

for(r = 0 ; r < SIZE ; r++) // TRANSPOSE, INTERCHANGING THE r AND c OF THE TWO MATRICES
{
for(c = 0 ; c < SIZE ; c++)
{
b[r][c] = a[c][r];
}
}

/*
ALGORITHM:
ASSUMING THERE ARE SQUARE MATRICES "A" AND "B" WITH DIMENSION "SIZE"
START
FOR R = 0 TO SIZE-1 STEP 1
FOR C = 0 TO SIZE-1 STEP 1
B[R][C] = A[C][R]
ENDFOR
ENDFOR
STOP
*/

printf("\n\tTransposed matrix :\n");
for(r = 0 ; r < SIZE ; r++)
{
printf("\n\t");
for(c = 0 ; c < SIZE ; c++)
{
printf("\t%d", b[r][c]);
}
}

for(r = 0 ; r < SIZE ; r++) // ROTATION
{
for(c = 0 ; c < SIZE ; c++)
{
b[c][SIZE-1 - r] = a[r][c];
}
}

/*
ALGORITHM:
ASSUMING THERE ARE SQUARE MATRICES "A" AND "B" WITH DIMENSION "SIZE"
START
FOR R = 0 TO SIZE-1 STEP 1
FOR C = 0 TO SIZE-1 STEP 1
B[C][SIZE-1-R] = A[R][C]
ENDFOR
ENDFOR
STOP
*/

printf("\n\tRotated matrix :\n");
for(r = 0 ; r < SIZE ; r++)
{
printf("\n\t");
for(c = 0 ; c < SIZE ; c++)
{
printf("\t%d", b[r][c]);
}
}

getch();
}

Wednesday, September 15, 2010

C PLUS PLUS

Virtual Function and Dynamic Binding

Before looking at what a virtual function is, let us discuss polymorphism a little further.

Polymorphism in C++ comes in two forms. One is called compile time polymorphism, and its examples are function overloading and operator overloading. The reason for calling it compile time polymorphism is that when we create multiple versions of the same function, the issue of linking a function call to the appropriate version of the function definition is decided by the compiler at the time of creating the object code.

1) For function overloading, the compiler looks at the argument set of the function call and finds the appropriate function definition by matching it against the argument set of each function definition.
2) If we are using function overriding, then the compiler finds the class of the calling object, and then links the function call to the function definition of the class to which the calling object belongs.
3) Likewise, if we are using operator overloading, the compiler looks at the operand(s) of the operator. If it finds that the operands are of built-in data types, then the operator is compiled with the system-defined meaning. If the operands are found to be of a user-defined type, then the operator is called with the user-defined meaning.

In all three cases described above, the version of the function or the operator to be used is decided by the compiler at the time of compilation of the program, that is, before the program starts to execute. For this reason, compile time polymorphism is also known as early binding or static binding.
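As a quick illustration of case (1), here is a minimal sketch of function overloading being resolved at compile time. It is written in standard C++ (so it uses <iostream> and int main( ) rather than the Turbo C++ conventions of the listings below), and the function name area and its two overloads are invented purely for this illustration.

#include <iostream>

// Two overloads of the same name. The compiler binds each call at compile
// time purely by matching the argument list - this is early (static) binding.
int area(int side) { return side * side; }                      // square
int area(int length, int breadth) { return length * breadth; }  // rectangle

int main()
{
    std::cout << area(5) << "\n";      // bound to area(int)
    std::cout << area(4, 6) << "\n";   // bound to area(int, int)
    return 0;
}

No object or pointer is involved in the decision; the argument lists alone settle which version runs, before the program ever executes.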

The other type of polymorphism supported by C++ is called runtime polymorphism. This is implemented by something known as the virtual function. A virtual function is a function that is declared in the base class with the keyword “virtual” (this is another use of the virtual keyword apart from its role in hybrid inheritance). After this is done, the virtual function is overridden in a derived class. To call these functions from main( ), the programmer is first required to create a pointer of the base class. This base class pointer is capable of holding the address of an object of either the base class or the derived class.

If we want to call a virtual function, first we have to store the address of the object of that class whose version of the virtual function we want to execute. When a virtual function is called, the only thing that matters is the class of the object whose address is stored inside the pointer at the moment of calling the virtual function. If we want to execute the base class’s virtual function, then the base class pointer should store the address of the base class object. Similarly, if we want to execute the derived class’s virtual function, then the base class pointer should store the address of the derived class object. The rest is simple. Just execute the virtual function using the base class pointer.

This is called runtime polymorphism, because the storage of a physical memory address into a pointer is possible only when the program is running. The compiler has no way to know the physical address at the time of compilation. For this reason, runtime polymorphism is also known as late binding or dynamic binding.
Consider the following program to understand the difference between the two types of polymorphism.

#include <iostream.h>
#include <conio.h>

class Base
{
public :
void Display() // NORMAL FUNCTION
{
cout<<"\n\tDisplay Base.";
}
virtual void Show()
// VIRTUAL FUNCTION, DECLARED WITH THE KEYWORD "virtual"
{
cout<<"\n\tShow Base.";
}
};

class Derived : public Base
{
public :
void Display() // OVERRIDING THE NORMAL FUNCTION
{
cout<<"\n\tDisplay Derived.";
}
void Show()
// OVERRIDING THE VIRTUAL FUNCTION, KEYWORD "virtual" IS NOT NEEDED
// HERE
{
cout<<"\n\tShow Derived.";
}
};

void main(void)
{
Base *bptr; // POINTER OF THE BASE CLASS

Base b;
Derived d;

clrscr();

bptr = &b; // STORING THE ADDRESS OF THE BASE CLASS OBJECT

bptr->Display(); // THIS WILL PRINT "Display Base."
bptr->Show(); // THIS WILL PRINT "Show Base."

bptr = &d; // STORING THE ADDRESS OF THE DERIVED CLASS OBJECT

bptr->Display(); // THIS WILL STILL PRINT "Display Base."
bptr->Show(); // THIS WILL PRINT "Show Derived."

getch();
}

See the output first. The output of the Display( ) function has been “Display Base.” both times. Display( ) is a normal function. For a normal overridden function, all the compiler sees is the type of the object that is calling the function. In both cases the calling object is bptr, which is a pointer of the Base class. As a result, the compiler treats this as a case of compile time polymorphism and binds both function calls to the base class implementation of the Display( ) function.

The case of the Show( ) function is a little different. First of all, it is a virtual function. To get runtime polymorphism, it has to be called through a base class pointer. When this function is called for the first time, the base class pointer is storing the address of the base class object, so the base class’s version of the virtual function is executed. The next time, the base class pointer is storing the address of the derived class object, so the derived class’s version of the virtual function is executed. The compiler cannot do the linking of these virtual function calls to a particular virtual function definition, as this happens at runtime.


Pure Virtual Function

It is a common practice to declare a function as virtual in the base class without giving it any real body. The classes which inherit from this base class are then responsible for overriding the virtual function inside their own scope. The virtual function in this case acts merely as a placeholder, or a skeleton. Consider the following program.

#include <iostream.h>
#include <conio.h>

class Base
{
public :
virtual void greetings()
{
// BODY OF THE VIRTUAL FUNCTION NOT DEFINED IN THE BASE CLASS
}
};

class Derived1 : public Base
{
public :
void greetings()
{
cout<<"\n\tHello!!!";
}
};

class Derived2 : public Base
{
public :
void greetings()
{
cout<<"\n\tHi!!!";
}
};

void main(void)
{
Base *bptr; // POINTER OF THE BASE CLASS

Derived1 d1;
Derived2 d2;

clrscr();

bptr = &d1;
bptr->greetings();

bptr = &d2;
bptr->greetings();

getch();
}

The virtual function’s body has been left empty in the base class, but has been overridden inside the two derived classes that inherit from the base class. The rest of the program is very similar to the program we have done previously.
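Strictly speaking, the greetings( ) function above has been given an empty body, so Base can still be instantiated. C++ marks a truly pure virtual function with the = 0 specifier, which forces every concrete derived class to override it and makes the base class abstract. Here is a minimal sketch of the same idea in standard C++ (the class and function names are reused from the program above; <iostream> and int main( ) replace the Turbo C++ conventions):

#include <iostream>

class Base
{
public:
    virtual void greetings() = 0;   // pure virtual: no body at all, Base is now abstract
};

class Derived1 : public Base
{
public:
    void greetings() { std::cout << "\n\tHello!!!"; }
};

class Derived2 : public Base
{
public:
    void greetings() { std::cout << "\n\tHi!!!"; }
};

int main()
{
    Derived1 d1;
    Derived2 d2;

    Base *bptr = &d1;    // POINTER OF THE BASE CLASS
    bptr->greetings();   // prints Hello!!!

    bptr = &d2;
    bptr->greetings();   // prints Hi!!!

    return 0;
}

With = 0 in place, a statement like Base b; would no longer compile, which is usually exactly what we want for a class that only acts as a skeleton.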

C ++

Inheritance

Inheritance is an integral feature of any object-oriented programming language. Inheritance is the process of creating a new class out of an existing class. In C++ terminology, the existing class is called the base class and the newly created class is called the derived class. Due to inheritance, the derived class may inherit some or all of the features of the base class.

Whenever there is a need to create a new class, the programmer has two choices. He/she can create the class, its attributes and its functions from scratch, which may take a lot of labour, time (and thereby money). The other option is to look through the collection of already created classes and try to find a class that most closely matches the attribute set and the function set of the class that is required. If such a class can be found, then all that needs to be done is to create the new class (the derived class) out of the already created class (the base class), directly inherit the common attributes and functions, and add only those attributes and functions which are needed specifically for the derived class. This may save a lot of labour, time and money.

In a sense, inheritance encourages the concept of code reuse, that is, reusing existing code to create new code units.

Inheritance also gives rise to another pair of terms, generalization and specialization. In a class hierarchy, where a set of base classes gives rise to a set of derived classes in a top-down manner, the higher we climb up this inheritance tree, the less detail is visible. The top levels of the class hierarchy always present those features which are common to all the underlying classes. This is called generalization. On the other hand, if we descend the inheritance tree, we reach derived classes that keep adding newer features. So, more specific features are visible at the bottom levels of the class hierarchy. This is called specialization.

Types of inheritance supported in C++

C++ supports the five types of inheritance described below.

Single Inheritance

Only one derived class is created out of only one base class.

Multiple Inheritance

One class is created out of more than one class. This means the derived class inherits the features of both parents.

Multilayer or Multilevel Inheritance

One class derives another class, and that class in turn derives yet another class. The ultimate derived class inherits from its parent as well as its grandparent.

Hierarchical Inheritance

One base class gives rise to more than one derived class. Essentially, this is single inheritance done more than once.

Hybrid Inheritance

One class gives rise to more than one class, and these classes then together derive a single class. This is called hybrid inheritance because the top level looks like hierarchical inheritance and the bottom level looks like multiple inheritance.

The Access Specifiers

C++ has three access specifiers, namely private, public and protected. We have already discussed private and public in the first chapter. Let us discuss them again from the point of view of how they behave in inheritance.

private: a private member, as we already know, cannot be accessed outside the class and can never be inherited.
protected: a protected member is by nature private, meaning that it cannot be accessed outside the class. However, it can be inherited by a derived class. Using protected mode, we can make otherwise private-style members inheritable.
public: a public member can be accessed outside the class and can also be inherited by a derived class.

Not only can the members themselves have these three access specifiers, but the mode of inheritance can also be private, protected or public. The effect of these three modes of inheritance on the three types of members can be summarized in the following cases.

Case 1: Inheritance in private mode

class Base
{
private:
private members; // CAN NEVER BE INHERITED
protected:
protected members;
public:
public members;
};

class Derived1 : private Base
{
private:
private members;

};

In other words, for a private mode inheritance, the protected and public members of the base class become the private members of the derived class.


Case 2: Inheritance in protected mode

class Base
{
private:
private members; // CAN NEVER BE INHERITED
protected:
protected members;
public:
public members;
};

class Derived2 : protected Base
{
protected:
protected members;

};

In other words, for a protected mode inheritance, the protected and public members of the base class become the protected members of the derived class.

Case 3: Inheritance in public mode

class Base
{
private:
private members; // CAN NEVER BE INHERITED
protected:
protected members;
public:
public members;
};

class Derived3 : public Base
{
protected:
protected members;
public:
public members;
};

In other words, for a public mode inheritance, the protected members of the base class become the protected members of the derived class and public members of the base class become the public members of the derived class.
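To see the effect of the three modes from outside the classes, here is a minimal sketch in standard C++. The class names (PrivDerived, ProtDerived, PubDerived) and the members priv, prot and pub are invented for this illustration; the commented-out lines are the ones the compiler would reject.

#include <iostream>

class Base
{
private:
    int priv = 1;   // can never be inherited or accessed outside Base
protected:
    int prot = 2;
public:
    int pub = 3;
};

class PrivDerived : private Base   { public: int sum() { return prot + pub; } };
class ProtDerived : protected Base { public: int sum() { return prot + pub; } };
class PubDerived  : public Base    { public: int sum() { return prot + pub; } };

int main()
{
    PubDerived  d1;
    ProtDerived d2;
    PrivDerived d3;

    std::cout << d1.pub << "\n";   // OK: pub stays public under public inheritance
    // std::cout << d2.pub;        // error: pub became protected under protected inheritance
    // std::cout << d3.pub;        // error: pub became private under private inheritance
    // std::cout << d1.prot;       // error: a protected member is never accessible from outside

    // Inside their own member functions, all three derived classes can use prot and pub.
    std::cout << d1.sum() << " " << d2.sum() << " " << d3.sum() << "\n";
    return 0;
}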

Single Inheritance

Let us now explain this using real programs. The following program describes single inheritance.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
void input_a();
void output_a();
};

class B : public A // PUBLIC MODE INHERITANCE
{
int b;
public:
void input_b();
void output_b();
};

void A :: input_a()
{
cout<<"\n\tEnter first number : ";
cin>>a;
}

void A :: output_a()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: input_b()
{
cout<<"\n\tEnter second number : ";
cin>>b;
}

void B :: output_b()
{
cout<<"\n\tSecond number : "<<b;
}

void main(void)
{
B obj;

clrscr();

obj.input_a(); // ALL FOUR ARE PUBLIC FUNCTIONS OF CLASS B
obj.input_b();
obj.output_a();
obj.output_b();

getch();
}

Watch this program carefully. Class A is the base class; it has one integer attribute and functions to perform input and output operations on it. When we are asked to create a class that has two integer attributes along with input and output functions, instead of creating that class from scratch, we derive class B from class A. This is public mode inheritance, so the public functions of class A, namely input_a( ) and output_a( ), have become public functions of class B. We may say that class B has four public functions. As a result, in the main( ) function, the object of class B calls all four functions: two of them have been inherited from the parent class, and the other two are its own member functions.

What if we change the program a little and make it private mode inheritance? The result would be as follows.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
void input_a();
void output_a();
};

class B : private A // PRIVATE MODE INHERITANCE
{
int b;
public:
void input_b();
void output_b();
};

void A :: input_a()
{
cout<<"\n\tEnter first number : ";
cin>>a;
}

void A :: output_a()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: input_b()
{
input_a(); // CALLING THIS FUNCTION AS A PRIVATE FUNCTION
cout<<"\n\tEnter second number : ";
cin>>b;
}

void B :: output_b()
{
output_a(); // CALLING THIS FUNCTION AS A PRIVATE FUNCTION
cout<<"\n\tSecond number : "<<b;
}

void main(void)
{
B obj;

clrscr();

obj.input_b();
obj.output_b();

getch();
}

Since this is private mode inheritance, the public functions of class A have become private functions of class B. In essence, class B now has two private functions, namely input_a( ) and output_a( ), and two public functions, namely input_b( ) and output_b( ). We know that a member function of a class can call the private functions of its own class, so input_a( ) and output_a( ) have been called directly from inside the bodies of input_b( ) and output_b( ).

The program for protected mode inheritance would be roughly the same. Please try that on your own; a minimal sketch is given below for comparison.
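Here is that minimal sketch of the protected-mode case, written in standard C++ (class names A and B and the functions input_a( ), output_a( ), input_b( ), output_b( ) are reused from the program above; the extra class C and its member show_first_again( ) are invented for the illustration). From main( ), class B behaves exactly as in the private-mode program, but because input_a( ) and output_a( ) stay protected rather than private inside B, a class derived from B can still call them.

#include <iostream>

class A
{
    int a;
public:
    void input_a()  { std::cout << "\n\tEnter first number : "; std::cin >> a; }
    void output_a() { std::cout << "\n\tFirst number : " << a; }
};

class B : protected A   // PROTECTED MODE INHERITANCE
{
    int b;
public:
    // input_a() and output_a() are now protected members of B: callable from
    // inside B and from classes derived from B, but not from main().
    void input_b()  { input_a();  std::cout << "\n\tEnter second number : "; std::cin >> b; }
    void output_b() { output_a(); std::cout << "\n\tSecond number : " << b; }
};

class C : public B
{
public:
    // Legal only because A was inherited in protected (not private) mode.
    void show_first_again() { output_a(); }
};

int main()
{
    C obj;

    obj.input_b();            // reads both numbers (input_b() calls input_a() internally)
    obj.output_b();           // prints both numbers
    obj.show_first_again();   // C can still reach the inherited output_a() directly
    // obj.output_a();        // error: output_a() is protected inside B and C
    return 0;
}

Had class B inherited from A in private mode instead, the call to output_a( ) inside class C would not compile; that is the practical difference between the two modes.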

How do the Constructors work in Inheritance?

We know that constructors are called whenever an object of a class is declared, in order to initialize the attributes of the class. In the example above, we declare an object of the derived class only, which results in the derived class constructor getting executed. But since the base class has an attribute of its own, it is necessary to execute the constructor of the base class as well, even though we are not declaring any object of the base class. So, there is a need to link the derived class constructor to the base class constructor, so that when the derived class constructor is executed, the base class constructor is also called. The following code shows how this can be done. This is a continuation of the single inheritance program.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
A(int a1) // PARAMETERIZED CONSTRUCTOR
{
a = a1;
}
void output();
};

class B : public A
{
int b;
public:
B(int a1, int b1) : A(a1)
// LINKING THE DERIVED CLASS CONSTRUCTOR TO THAT OF THE BASE CLASS
{
b = b1;
}
void output();
};

void A :: output()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: output()
{
cout<<"\n\tSecond number : "<<b;
}

void main(void)
{
B obj(100, 200);

clrscr();

obj.A::output();
obj.output();

getch();
}

The object created for class B calls the parameterized constructor of class B, because it passes two integer constants (100 and 200) to the constructor of class B. The constructor of class B keeps one of these two values (200) for itself and passes the other value to the parameterized constructor of the parent class A. Please watch carefully how the constructors of the derived and the base class have been linked in the statement:

B(int a1, int b1) : A(a1)

Function Overriding

This program also stands as an example of what is called function overriding. We have already discussed function overloading in an earlier chapter, which means defining functions with the same name but different signatures and different bodies. Function overriding is the technique of redefining a function of the base class in the body of a subclass derived from that base class. In this example, we can see that the output( ) member function of the base class A has been redefined in the derived class B. This is a case of polymorphism, where two different versions of the same function exist in the base class and the derived class. An ambiguity could therefore arise, where the compiler must decide which of the two versions of the function has to be executed. The issue of calling a particular version of the overridden function is resolved by the type of the object that calls the overridden function. If an object of the base class calls the output( ) function, then the version in the base class is executed. If an object of the derived class calls the output( ) function, then the version in the derived class is executed. In our example, we are using only one object, that of class B. Using this object to call the output( ) function results in the derived class function getting called. If we want to use that same object to call class A’s output( ) function, then we have to use the statement

obj.A::output();

Using the scope resolution operator tells the compiler to execute the output( ) function declared in class A.

Multiple Inheritance

The following program shows how we can create a new class using multiple inheritance.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
A(int a1) // PARAMETERIZED CONSTRUCTOR
{
a = a1;
}
void output();
};

class B
{
int b;
public:
B(int b1) // PARAMETERIZED CONSTRUCTOR
{
b = b1;
}
void output();
};

class C : public A, public B
{
int c;
public:
C(int a1, int b1, int c1) : A(a1), B(b1)
// LINKING THE DERIVED CLASS CONSTRUCTOR TO THAT OF THE BASE CLASSES
{
c = c1;
}
void output();
};

void A :: output()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: output()
{
cout<<"\n\tSecond number : "<<b;
}

void C :: output()
{
cout<<"\n\tThird number : "<<c;
}

void main(void)
{
C obj(100, 200, 300);

clrscr();

obj.A::output();
obj.B::output();
obj.output();

getch();
}

The program is pretty self-explanatory. Just watch how the constructors of the derived class and the two base classes have been linked in the statement:

C(int a1, int b1, int c1) : A(a1), B(b1)



Multilayer or Multilevel Inheritance

The following program shows how we can create a new class using multilayer or multilevel inheritance.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
A(int a1)
{
a = a1;
}
void output();
};

class B : public A
{
int b;
public:
B(int a1, int b1) : A(a1)
{
b = b1;
}
void output();
};

class C : public B
{
int c;
public:
C(int a1, int b1, int c1) : B(a1, b1)
{
c = c1;
}
void output();
};

void A :: output()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: output()
{
cout<<"\n\tSecond number : "<<b;
}

void C :: output()
{
cout<<"\n\tThird number : "<<c;
}

void main(void)
{
C obj(100, 200, 300);

clrscr();
obj.A::output();
obj.B::output();
obj.output();

getch();
}

Hierarchical Inheritance

The following program shows how we can create a new class using hierarchical inheritance.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
A(int a1)
{
a = a1;
}
void output();
};

class B : public A
{
int b;
public:
B(int a1, int b1) : A(a1)
{
b = b1;
}
void output();
};

class C : public A
{
int c;
public:
C(int a1, int c1) : A(a1)
{
c = c1;
}
void output();
};

void A :: output()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: output()
{
cout<<"\n\tSecond number : "<<b;
}

void C :: output()
{
cout<<"\n\tThird number : "<<c;
}

void main(void)
{
B obj1(10, 20);
C obj2(100, 200);

clrscr();

cout<<"\n\n\tObject 1:";
obj1.A::output();
obj1.output();

cout<<"\n\n\tObject 2:";
obj2.A::output();
obj2.output();

getch();
}





Hybrid Inheritance

Now, this is the tricky one. The problem associated with hybrid inheritance is that the same features of the base class A are inherited by the ultimate derived class D through multiple paths. This is different from the overriding problem, because in overriding, although the name of the function is the same, they are really different versions of the same-named function, so the compiler can differentiate between them. But in hybrid inheritance, a member of class A reaches class D once through B and once through C. These are not different versions of the same member, but the same member itself, inherited twice. The ambiguity problem is more acute in this case.
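To see the ambiguity concretely, here is a minimal sketch in standard C++ of what happens when B and C are derived from A without the virtual keyword. The class names follow the description above, while the data member value is invented for the illustration; the commented line is the one the compiler rejects.

#include <iostream>

class A { public: int value = 10; };

class B : public A { };            // non-virtual inheritance
class C : public A { };            // non-virtual inheritance

class D : public B, public C { };  // D now contains TWO separate copies of A

int main()
{
    D obj;

    // std::cout << obj.value;          // error: ambiguous - A::value is reached once via B and once via C
    std::cout << obj.B::value << "\n";  // we must say which copy we mean
    std::cout << obj.C::value << "\n";
    return 0;
}

Declaring B and C with the virtual keyword, as in the listing that follows, makes D hold a single shared copy of A, so this ambiguity over A's members disappears.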

The solution here is to use the keyword “virtual” while declaring the middle level classes, i.e. classes B and C.

#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
void input();
void output();
};

class B : public virtual A
{
int b;
public:
void input();
void output();
};

class C : public virtual A
{
int c;
public:
void input();
void output();
};

class D : public B, public C
{
int d;
public:
void input();
void output();
};

void A :: input()
{
cout<<"\n\tEnter first number : ";
cin>>a;
}

void A :: output()
{
cout<<"\n\tFirst number : "<<a;
}

void B :: input()
{
cout<<"\n\tEnter second number : ";
cin>>b;
}

void B :: output()
{
cout<<"\n\tSecond number : "<<b;
}

void C :: input()
{
cout<<"\n\tEnter third number : ";
cin>>c;
}

void C :: output()
{
cout<<"\n\tThird number : "<<c;
}

void D :: input()
{
cout<<"\n\tEnter fourth number : ";
cin>>d;
}

void D :: output()
{
cout<<"\n\tFourth number : "<<d;
}

void main(void)
{
D obj;

clrscr();

obj.A::input();
obj.B::input();
obj.C::input();
obj.input();

obj.A::output();
obj.B::output();
obj.C::output();
obj.output();

getch();
}

The middle level classes have been declared with the “virtual” keyword as in
class B : public virtual A and class C : public virtual A. The “public” and “virtual” keywords can be written in any order, i.e. class B : virtual public A can also be used without any change in meaning.

Order of calling the Constructor and Destructor

When the constructors of a class hierarchy are called, they are always called in the order of derivation. This means that the base class constructor is called first and the derived class constructor is called later. On the other hand, the destructors are called in the reverse order of derivation. This means that the derived class destructor is called before the base class destructor. Run the following program to see the result.




#include <iostream.h>
#include <conio.h>

class A
{
int a;
public:
A(int a1)
{
a = a1;
clrscr();
cout<<"\n\tFirst number : "<<a;
}
~A() // DESTRUCTOR
{
cout<<"\n\tDestructor for class A.";
getch();
}
};

class B
{
int b;
public:
B(int b1)
{
b = b1;
cout<<"\n\tSecond number : "<<b;
}
~B()
{
cout<<"\n\tDestructor for class B.";
getch();
}
};

class C : public A, public B
{
int c;
public:
C(int a1, int b1, int c1) : A(a1), B(b1)
{
c = c1;
cout<<"\n\tThird number : "<<c;
}
~C()
{
cout<<"\n\tDestructor for class C.";
getch();
}
};

void main(void)
{
C obj(100, 200, 300);

getch();
}