Monday, November 16, 2009

Hub

In a hub, a frame is passed along or "broadcast" to every one of its ports. It doesn't matter that the frame is only destined for one port; the hub has no way of distinguishing which port a frame should be sent to. Passing it along to every port ensures that it will reach its intended destination. This places a lot of traffic on the network and can lead to poor network response times.
Additionally, a 10/100 Mbps hub must share its bandwidth with each and every one of its ports. So when only one PC is transmitting, it will have access to the maximum available bandwidth. If, however, multiple PCs are transmitting, then that bandwidth will need to be divided among all of those systems, which will degrade performance.
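As a rough illustration of this sharing, here is a minimal Python sketch; the 100 Mbps figure and the even split between active PCs are simplifying assumptions, not a model of real hub behaviour:

```python
# Rough illustration: a hub's bandwidth is shared by all actively
# transmitting PCs, so each one's effective share shrinks as more talk.
HUB_BANDWIDTH_MBPS = 100  # assumed 10/100 hub running at 100 Mbps

def effective_share(active_pcs: int) -> float:
    """Approximate per-PC bandwidth when several PCs transmit at once."""
    if active_pcs < 1:
        raise ValueError("need at least one active PC")
    return HUB_BANDWIDTH_MBPS / active_pcs

for n in (1, 2, 5, 10):
    print(f"{n:2d} active PC(s): ~{effective_share(n):.1f} Mbps each")
```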









For ease of use, both the Fast Ethernet card and such dual-speed hubs are usually configured for automatic configuration: they are supposed to detect the type of hub or network card and then automatically select the proper connection speed.

That works very well in most installations, but (as usual) not always: sometimes the hub and the network card do not work properly together and the automatic configuration fails, so the network does not work (as can be tested with a TCP/IP command such as ping).

As a first diagnostic step, have a look at the indicator lights on your hub:










Switch


A switch is a device that performs switching. Specifically, it forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets. This is distinct from a hub in that it only forwards the datagrams to the ports involved in the communication rather than to all ports connected. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3), which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. A switch normally has numerous ports, with the intention that most or all of the network be connected directly to a switch, or to another switch that is in turn connected to a switch.
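To make the forwarding-versus-flooding behaviour concrete, here is a minimal sketch of a learning switch; the `LearningSwitch` class and its method names are illustrative only, not any vendor's API:

```python
# Minimal sketch of a layer-2 learning switch: learn source MAC -> port,
# forward to the known port, flood to all other ports when unknown.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
        self.mac_table[src_mac] = in_port       # learn the sender's port
        if dst_mac in self.mac_table:           # known destination:
            return [self.mac_table[dst_mac]]    # forward to that port only
        # unknown destination (or broadcast): flood to every other port
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))  # unknown -> flood to ports 1, 2, 3
print(sw.handle_frame(1, "bb:bb", "aa:aa"))  # aa:aa was learned -> [0]
```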
"Switch" is also a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic based on load or on application content (e.g., a Web URL identifier). Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI; multilayer device selection is an advanced topic that may lead to selecting particular implementations, but it is not where an understanding of networking should begin. As a concrete product example, the Switch 5530-24TFD is a next-generation stackable 10/100/1000/10000 Mbps Ethernet Layer 3 routing switch designed to provide high-density Gigabit desktop connectivity and Gigabit and 10 Gigabit fiber aggregation connectivity for mid-size and large enterprise customers' wiring closets. It combines flexible deployment using Gigabit copper or fiber connections with exceptional performance through dual 10 Gigabit uplinks.
The Ethernet Routing Switch 5530-24TFD provides 24 10/100/1000BASE-T RJ-45 ports, 12 shared Small Form-factor Pluggable (SFP) slots, and 2 slots for 10 Gigabit Ethernet Small Form-factor Pluggable (XFP) modules. The switch includes two built-in stacking ports in a compact one-rack-unit-high design. The Ethernet Routing Switch 5530-24TFD may be used in standalone mode, or may be stacked in a mixed stack of up to 8 units with existing Ethernet Routing Switch 5510-24T/48T or 5520-24T/48T-PWR devices.

Routers


Routers are networking devices that forward data packets between networks, using headers and forwarding tables to determine the best path on which to forward the packets. Routers work at the network layer of the TCP/IP model, or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the next hop to which it should be sent (RFC 1812). They use preconfigured static routes, the status of their hardware interfaces, and routing protocols to select the best route between any two subnets. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some DSL and cable modems, for home (and even office) use, have been integrated with routers to allow multiple home/office computers to access the Internet through the same connection. Many of these newer devices also include wireless access points (WAPs) or wireless routers to allow IEEE 802.11b/g wireless-enabled devices to connect to the network without the need for a cabled connection. Linksys, for example, has some new Wi-Fi products on the market as part of its new RangePlus family, which promises to increase the range of your home wireless network at affordable prices. The new products include a wireless router (WRT100), notebook adapter (WPC100), PCI adapter (WMP100), and USB notebook adapter (WUSB100).
The WRT100 router has dropped the Linksys blue color and taken on the flat black look more common of its Cisco parent. The RangePlus family of products uses Multiple Input, Multiple Output (MIMO) technology to provide better coverage in a larger area with fewer dead spots. Although Linksys would like you to buy all the products together, since they are designed to work with one another, they will likely support standard 802.11b/g as well. The products do, however, feature some added touches unique to Linksys that make setup a little easier.
The Linksys Easy Link Advisor on the WRT100 router is a software-based "wizard" that steps users through illustrated instructions for setting up a home network. There is also something called Wi-Fi Protected Setup, which simply comes down to a button on the different devices that can be pressed to automatically connect them together.
The WRT100 router and WPC100 notebook adapter are both available now for $99.99 each, and the other RangePlus products and pricing will be announced shortly, with availability before the end of the year.
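Coming back to the forwarding tables mentioned at the start of this section, here is a hedged sketch of how a router might pick a next hop using longest-prefix matching; the route entries and next-hop addresses are invented purely for illustration:

```python
# Sketch of longest-prefix-match route selection over a static table.
import ipaddress

# Hypothetical forwarding table: (prefix, next hop). Default route last.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),  "10.0.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.254"),  # default route
]

def next_hop(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if dst in net]
    # choose the most specific (longest) matching prefix
    best = max(matches, key=lambda item: item[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.3"))  # matched by the /16 -> 10.1.0.1
print(next_hop("8.8.8.8"))   # only the default route matches -> 192.0.2.254
```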

Wide Area Network


A WAN is a data communications network that covers a relatively broad geographic area (i.e., from one city to another or from one country to another) and that often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

A Wide Area Network (WAN) is a computer network covering a broad geographic area, which may spread across the entire world. WANs often connect multiple smaller networks, such as local area networks (LANs) or metropolitan area networks (MANs). The world's most popular WAN is the Internet. Some segments of the Internet are also WANs in themselves. The key difference between WAN and LAN technologies is scalability: a WAN must be able to grow as needed to cover multiple cities, even countries and continents.

Both packet switching and circuit switching technologies are used in the WAN. Packet switching allows users to share common carrier resources so that the carrier can make more efficient use of its infrastructure. In a packet switching setup, networks have connections into the carrier's network, and many customers share the carrier's network. The carrier can then create virtual circuits between customers' sites by which packets of data are delivered from one to the other through the network.
Circuit switching allows data connections to be established when needed and then terminated when communication is complete. This works much the way a normal telephone line works for voice communication. Integrated Services Digital Network (ISDN) is a good example of circuit switching. When a router has data for a remote site, the switched circuit is initiated with the circuit number of the remote network.
 


Global Area Network (GAN)


Global area networks (GAN) specifications are in development by several groups, and there is no common definition. In general, however, a GAN is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is "handing off" the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial Wireless local area networks (WLAN).

Metropolitan Area Network


A MAN is a bigger version of a LAN and uses similar technology. It uses one or two cables but does not contain switching elements. It covers an entire city and may be related to the local cable TV network. A MAN standard is DQDB (Distributed Queue Dual Bus), IEEE 802.6, as shown in Figure 4:
• Two unidirectional buses.
• Each bus has a head-end, which initiates transmission activity.
• Traffic to the right uses the upper bus.
• Traffic to the left uses the lower bus (see the sketch below).
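A toy sketch of the bus-selection rule just listed; the node numbering is purely illustrative:

```python
# Toy model of DQDB bus selection: nodes sit along two unidirectional buses.
def choose_bus(src_node: int, dst_node: int) -> str:
    """Return which bus carries a frame in this simplified DQDB model."""
    if dst_node > src_node:
        return "upper bus (traffic to the right)"
    if dst_node < src_node:
        return "lower bus (traffic to the left)"
    return "local delivery (same node)"

print(choose_bus(2, 5))  # upper bus
print(choose_bus(7, 3))  # lower bus
```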
In recent years, SONET/SDH-based transport networks have come to be considered too inflexible, inefficient, and overly complex for the purposes of data communication. As the importance of data communication has increased, a search has begun for a replacement for SONET/SDH. However, developing such a replacement is neither easy nor straightforward. On the contrary, many useful functions provided by SONET/SDH appear to be too complicated to reinvent in any other way. Furthermore, long-established telecommunications companies have already invested billions of euros in their SDH networks, and would therefore prefer to utilise the existing infrastructure.
Fortunately, a new solution based on proven SDH/SONET technology is evolving, and promises to turn SONET/SDH into an efficient multi-service transport network that is easy to manage and provision. 'Data over SONET/SDH' (DoS) is based on three new features in SONET/SDH networks: VC (Virtual Concatenation), LCAS (Link Capacity Adjustment Scheme), and GFP (Generic Framing Procedure). VC and LCAS together enable fine-grained capacity allocation and management. Efficient framing and link-layer statistical multiplexing is achieved using GFP, which provides a unified method to multiplex packets from diverse sources. Furthermore, DoS-capable equipment can be mixed with older, inflexible SDH equipment to provide a reasonable evolutionary upgrade path for 'traditional' network operators.
The ability to provide optical connections rapidly and dynamically while making optimal use of network resources is also important. This can be achieved by adding intelligence to a traditional optical transport network and updating it to an Automatic Switched Transport Network (ASTN).





Through the OAN project, a network evaluation platform was developed. The objective was to design and implement the electrical parts of the feeder network, including all the networking activities starting from the link layer.
OAN Platform
The OAN network acts as a feeder network, connecting multiple subscribers to a core network. The test network prototype (Figures 2 and 3) exploits two counter-rotating 2.5 Gbit/s SDH rings in the transport network side, and two Gigabit Ethernet links together with one 2.5 Gbit/s SDH link in the access network side. The network functionality is implemented in Field Programmable Gate Arrays (FPGAs), which enable flexible network design and system testing. The physical equipment of the OAN node is fitted into a standard CompactPCI–frame, making the node easy to move and install. The CompactPCI-frame includes a Central Processing Unit (CPU) card, which is used by the management software.
Figure 2: Access node architecture.
Figure 3: An OAN prototype node is fitted into a standard CompactPCI-frame.
The access network and transport network interfaces are implemented into separate interface and router cards respectively. Data between the interface and router cards is transported using a dedicated bus system and specific transport control. Flexible design of the system allows the interconnection of multiple router and interface cards. By developing new interface card versions, multiple technologies can be supported in the access network side.
To obtain both efficient optical protection and a cost-effective structure, the OAN network is physically a ring and logically a star. The OAN prototype network consists only of three to four nodes, providing low network complexity. One node acts as a hub, connecting other nodes to the core network and to each other. The connections are established using WDM (Wavelength Division Multiplexing) technology.
Furthermore, a network-monitoring system for the OAN network was designed. In the design the usability of Simple Network Management Protocol (SNMP), standardised Management Information Bases (MIBs), and ready-to-use management software was considered.
Next-Generation SONET/SDH in Use
The next-generation SONET/SDH enables new types of services with more efficient network usage to be easily implemented by utilising existing infrastructure.
Corporations require diverse services (e.g., voice, VPN, data storage, and Internet connection services) from operators. Traditionally the different services are provided through technology-specific transport pipes. However, next-generation SDH enables the simultaneous transport of heterogeneous services over one wavelength, thereby saving network-building and maintenance costs.
Usually a virtual private network (VPN) is used to bridge operators' access points. In some applications, however, it is desirable to transport the native network signal without extracting packets or frames. Normally the datacom protocols rely on 8B/10B coding, which causes a 25 percent increase in bandwidth. Using next-generation SDH, which maps 8B/10B-coded data into 64B/65B-coded sequences, the required bandwidth is substantially decreased.
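The overhead figures implied above can be checked with a couple of lines of arithmetic; this is simple bookkeeping, not a model of the GFP mapping itself:

```python
# Line-coding overhead: extra bits transmitted per payload bit.
def overhead_percent(payload_bits: int, coded_bits: int) -> float:
    return (coded_bits - payload_bits) / payload_bits * 100

print(f"8B/10B : {overhead_percent(8, 10):.1f}% extra bandwidth")   # 25.0%
print(f"64B/65B: {overhead_percent(64, 65):.2f}% extra bandwidth")  # ~1.56%
```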






Local Area Network


A LAN is a high-speed data network that covers a relatively small geographic area. It typically connects workstations, personal computers, printers, servers, and other devices. LANs offer computer users many advantages, including shared access to devices and applications, file exchange between connected users, and communication between users via electronic mail and other applications.
Protocols and the OSI Reference Model
LAN protocols function at the lowest two layers of the OSI reference model, as discussed in Chapter 1, "Internetworking Basics," between the physical layer and the data link layer. Figure 2-2 illustrates how several popular LAN protocols map to the OSI reference model.
Figure 2-2 Popular LAN Protocols Mapped to the OSI Reference Model

Media-Access Methods
Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access collision detect (CSMA/CD) and token passing.
In networks using CSMA/CD technology such as Ethernet, network devices contend for the network media. When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a collision occurred. A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices. Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why performance of Ethernet degrades rapidly as the number of devices on a single network increases.
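A minimal sketch of the "wait a random length of time" part of this behaviour, assuming the truncated binary exponential backoff used by classic Ethernet; slot-time units and the collision detection itself are abstracted away:

```python
import random

# Simplified CSMA/CD backoff: after the n-th collision, wait a random
# number of slot times chosen from 0 .. 2**min(n, 10) - 1 (truncated
# binary exponential backoff), then try to transmit again.
def backoff_slots(collision_count: int) -> int:
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)

for n in range(1, 6):
    print(f"after collision {n}: wait {backoff_slots(n)} slot time(s)")
```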

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device. When a device has data to send, it must wait until it has the token and then sends its data. When the data transmission is complete, the token is released so that other devices may use the network media. The main advantage of token-passing networks is that they are deterministic. In other words, it is easy to calculate the maximum time that will pass before a device has the opportunity to send data. This explains the popularity of token-passing networks in some real-time environments such as factories, where machinery must be capable of communicating at a determinable interval.
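The determinism claim can be illustrated with a back-of-the-envelope bound: at worst, every other station takes one full turn before the token comes back around. The station count and holding time below are made-up example values, not figures from any standard:

```python
# Worst-case wait before a station gets the token (simplified model):
# every other station holds the token for its maximum holding time once.
stations = 50                 # example ring size
max_holding_time_ms = 10      # example maximum token holding time per station

worst_case_wait_ms = (stations - 1) * max_holding_time_ms
print(f"worst-case wait for the token: {worst_case_wait_ms} ms")  # 490 ms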
For CSMA/CD networks, switches segment the network into multiple collision domains. This reduces the number of devices per network segment that must contend for the media. By creating smaller collision domains, the performance of a network can be increased significantly without requiring addressing changes.
Normally CSMA/CD networks are half-duplex, meaning that while a device sends information, it cannot receive at the same time. While that device is talking, it is incapable of also listening for other traffic. This is much like a walkie-talkie. When one person wants to talk, he presses the transmit button and begins speaking. While he is talking, no one else on the same frequency can talk. When the sending person is finished, he releases the transmit button and the frequency is available to others.
When switches are introduced, full-duplex operation is possible. Full-duplex works much like a telephone—you can listen as well as talk at the same time. When a network device is attached directly to the port of a network switch, the two devices may be capable of operating in full-duplex mode. In full-duplex mode, performance can be increased, but
not quite as much as some like to claim. A 100-Mbps Ethernet segment is capable of transmitting 200 Mbps of data, but only 100 Mbps can travel in one direction at a time. Because most data connections are asymmetric (with more data traveling in one direction than the other), the gain is not as great as many claim. However, full-duplex operation does increase the throughput of most applications because the network media is no longer shared. Two devices on a full-duplex connection can send data as soon as it is ready.
Token-passing networks such as Token Ring can also benefit from network switches. In large networks, the delay between turns to transmit may be significant because the token is passed around the network.
Transmission Methods
LAN data transmissions fall into three classifications: unicast, multicast, and broadcast.
In each type of transmission, a single packet is sent to one or more nodes.
In a unicast transmission, a single packet is sent from the source to a destination on a network. First, the source node addresses the packet by using the address of the destination node. The packet is then sent onto the network, and finally, the network passes the packet to its destination.
A multicast transmission consists of a single data packet that is copied and sent to a specific subset of nodes on the network. First, the source node addresses the packet by using a multicast address. The packet is then sent into the network, which makes copies of the packet and sends a copy to each node that is part of the multicast address.
A broadcast transmission consists of a single data packet that is copied and sent to all nodes on the network. In these types of transmissions, the source node addresses the packet by using the broadcast address. The packet is then sent on to the network, which makes copies of the packet and sends a copy to every node on the network.
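As a small illustration of how these three destination types can be distinguished at the IP level, here is a sketch using Python's ipaddress module; the sample addresses are arbitrary:

```python
import ipaddress

# Classify sample IPv4 destinations; 255.255.255.255 is the limited
# broadcast address, and 224.0.0.0/4 is the multicast range.
for dst in ("192.168.1.20", "224.0.0.251", "255.255.255.255"):
    addr = ipaddress.ip_address(dst)
    if addr == ipaddress.ip_address("255.255.255.255"):
        kind = "broadcast"
    elif addr.is_multicast:
        kind = "multicast"
    else:
        kind = "unicast"
    print(f"{dst:>17} -> {kind}")
```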
Topologies
LAN topologies define the manner in which network devices are organized. Four common LAN topologies exist: bus, ring, star, and tree. These topologies are logical architectures, but the actual devices need not be physically organized in these configurations. Logical bus and ring topologies, for example, are commonly organized physically as a star. A bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations. Of the three
most widely used LAN implementations, Ethernet/IEEE 802.3 networks—including 100BaseT—implement a bus topology, which is illustrated in Figure 2-3.
Figure 2-3 Some Networks Implement a Local Bus Topology

A ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop. Both Token Ring/IEEE 802.5 and FDDI networks implement a ring topology. Figure 2-4 depicts a logical ring topology.
Figure 2-4 Some Networks Implement a Logical Ring Topology

A star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub, or switch, by dedicated links. Logical bus and ring topologies are often implemented physically in a star topology, which is illustrated in Figure 2-5.
A tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case. Figure 2-6 illustrates a logical tree topology.
Figure 2-6 A Logical Tree Topology Can Contain Multiple Nodes

Devices
Devices commonly used in LANs include repeaters, hubs, LAN extenders, bridges, LAN switches, and routers.


Virtual Memory

Cache Memory

CPU



8086 CPU ARCHITECTURE
The microprocessor functions as the CPU in the stored-program model of the digital computer. Its job is to generate all system timing signals and synchronize the transfer of data between memory, I/O, and itself. It accomplishes this task via the three-bus system architecture previously discussed.
The microprocessor also has a S/W function. It must recognize, decode, and execute program instructions fetched from the memory unit. This requires an Arithmetic-Logic Unit (ALU) within the CPU to perform arithmetic and logical (AND, OR, NOT, compare, etc.) functions.
The 8086 CPU is organized as two separate processors, called the Bus Interface Unit (BIU) and the Execution Unit (EU). The BIU provides H/W functions, including generation of the memory and I/O addresses for the transfer of data between the outside world (outside the CPU, that is) and the EU.
The EU receives program instruction codes and data from the BIU, executes these instructions, and stores the results in the general registers. By passing the data back to the BIU, data can also be stored in a memory location or written to an output device. Note that the EU has no connection to the system buses. It receives and outputs all its data through the BIU.

The main difference between an 8088 microprocessor and an 8086 microprocessor is the BIU: in the 8088, the BIU data bus path is 8 bits wide, versus the 8086's 16-bit data bus. Another difference is that the 8088 instruction queue is four bytes long instead of six.
The important point to note, however, is that because the EU is the same for each processor, the programming instructions are exactly the same for each. Programs written for the 8086 can be run on the 8088 without any changes.

Although the 8086/88 still functions as a stored program computer, organization of the CPU into a separate BIU and EU allows the fetch and execute cycles to overlap. To see this, consider what happens when the 8086 or 8088 is first started.
1. The BIU outputs the contents of the instruction pointer register (IP) onto the address bus, causing the selected byte or word to be read into the BIU.

2. Register IP is incremented by 1 to prepare for the next instruction fetch.

3. Once inside the BIU, the instruction is passed to the queue. This is a first-in, first-out storage register sometimes likened to a "pipeline".

4. Assuming that the queue is initially empty, the EU immediately draws this instruction from the queue and begins execution.

5. While the EU is executing this instruction, the BIU proceeds to fetch a new instruction. Depending on the execution time of the first instruction, the BIU may fill the queue with several new instructions before the EU is ready to draw its next instruction.

The BIU is programmed to fetch a new instruction whenever the queue has room for one (with the 8088) or two (with the 8086) additional bytes. The advantage of this pipelined architecture is that the EU can execute instructions almost continually instead of having to wait for the BIU to fetch a new instruction.
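A toy model of the fetch/execute overlap in steps 1 to 5 may help; it is purely illustrative: the queue length matches the 8086's six bytes, but the timing of fetches and the "slow" EU are simplifications, not cycle-accurate behaviour:

```python
from collections import deque

# Toy model of the 8086 BIU/EU overlap: the BIU tops up a small prefetch
# queue each cycle, while the EU (deliberately slower here, finishing one
# instruction every two cycles) pulls instructions from the front of it.
QUEUE_SIZE = 6                                  # 8086 queue is six bytes
program = [f"instr_{i}" for i in range(8)]      # pretend instruction stream

queue = deque()
ip = 0                                          # next instruction to fetch

for cycle in range(16):
    # BIU: fetch whenever the queue has room and instructions remain
    if len(queue) < QUEUE_SIZE and ip < len(program):
        queue.append(program[ip])
        ip += 1
    # EU: finish an instruction only on even cycles (simulating slow execution)
    executed = queue.popleft() if (cycle % 2 == 0 and queue) else None
    print(f"cycle {cycle:2d}: queue depth {len(queue)}, executed {executed}")
```

Running it shows the queue filling up while the EU is busy, which is exactly the overlap the pipelined BIU/EU split is meant to provide.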
There are three conditions that will cause the EU to enter a "wait" mode. The first occurs when an instruction requires access to a memory location not in the queue. The BIU must suspend fetching instructions and output the address of this memory location. After waiting for the memory access, the EU can resume executing instruction codes from the queue (and the BIU can resume filling the queue). The second occurs when the instruction being executed is a jump or branch: the queue must be flushed and refilled from the new program address, and the EU must wait for the first instruction from that new location to be fetched.

One other condition can cause the BIU to suspend fetching instructions. This occurs during execution of instructions that are slow to execute. For example, the instruction AAM (ASCII Adjust for Multiplication) requires 83 clock cycles to complete. At four cycles per instruction fetch, the queue will be completely filled during the execution of this single instruction. The BIU will thus have to wait for the EU to pull one or two bytes from the queue before resuming the fetch cycle.
A subtle advantage of the pipelined architecture should be mentioned. Because the next several instructions are usually in the queue, the BIU can access memory at a somewhat "leisurely" pace. This means that slower memory parts can be used without affecting overall system performance.
PROGRAMMING MODEL
As a programmer of the 8086 or 8088 you must become familiar with the various registers in the EU and BIU.

The data group consists of the accumulator and the BX, CX, and DX registers. Note that each can be accessed as a byte or a word. Thus BX refers to the 16-bit base register but BH refers only to the higher 8 bits of this register. The data registers are normally used for storing temporary results that will be acted on by subsequent instructions.
The pointer and index group are all 16-bit registers (you cannot access the low or high bytes alone). These registers are used as memory pointers. Sometimes a pointer reg will be interpreted as pointing to a memory byte and at other times a memory word. As you will see, the 8086/88 always stores words with the high-order byte in the high-order word address.
Register IP could be considered part of the previous group, but this register has only one function: to point to the next instruction to be fetched by the BIU. Register IP is physically part of the BIU and is not under the direct control of the programmer as the other pointer registers are.
Six of the flags are status indicators, reflecting properties of the result of the last arithmetic or logical instructions. The 8086/88 has several instructions that can be used to transfer program control to a new memory location based on the state of the flags.
Three of the flags can be set or reset directly by the programmer and are used to control the operation of the processor. These are TF, IF, and DF.
The final group of registers is called the segment group. These registers are used by the BIU to determine the memory address output by the CPU when it is reading or writing from the memory unit. To fully understand these registers, we must first study the way the 8086/88 divides its memory into segments.
SEGMENTED MEMORY
Even though the 8086 is considered a 16-bit processor (it has a 16-bit data bus width), its memory is still thought of in bytes. At first this might seem a disadvantage:
Why saddle a 16-bit microprocessor with an 8-bit memory?
Actually, there are a couple of good reasons. First, it allows the processor to work on bytes as well as words. This is especially important with I/O devices such as printers, terminals, and modems, all of which are designed to transfer ASCII-encoded (7- or 8-bit) data.
Second, many of the 8086's (and 8088's) operation codes are single bytes. Other instructions may require anywhere from two to seven bytes. By being able to access individual bytes, these odd-length instructions can be handled.
We have already seen that the 8086/88 has a 20-bit address bus, allowing it to output 2^20, or 1,048,576, different memory addresses. As you can see, 524,288 words can also be visualized.
As mentioned, the 8086 reads 16 bits from memory by simultaneously reading an odd-addressed byte and an even-addressed byte. For this reason the 8086 organizes its memory into an even-addressed bank and an odd-addressed bank.
With regard to this, you might wonder if all words must begin at an even address. Well, the answer is no. However, there is a penalty to be paid when a word begins at an odd address: the CPU must perform two memory read cycles, one to fetch the low-order byte and a second to fetch the high-order byte. This slows down the processor but is transparent to the programmer.
The last few paragraphs apply only to the 8086. The 8088 with its 8-bit data bus interfaces to the 1 MB of memory as a single bank. When it is necessary to access a word (whether on an even- or an odd-addressed boundary) two memory read (or write) cycles are performed. In effect, the 8088 pays a performance penalty with every word access. Fortunately for the programmer, except for the slightly slower performance of the 8088, there is no difference between the two processors.
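A small sketch of the bus-cycle rule described above, for the 8086 only (the 8088, as just noted, always needs two cycles per word regardless of alignment):

```python
# 8086 word access: one bus cycle if the word starts at an even address,
# two bus cycles if it starts at an odd address (one cycle per byte).
def bus_cycles_8086(address: int, access_is_word: bool) -> int:
    if not access_is_word:
        return 1                      # a byte access is always one cycle
    return 1 if address % 2 == 0 else 2

print(bus_cycles_8086(0x1000, True))   # even-addressed word -> 1 cycle
print(bus_cycles_8086(0x1001, True))   # odd-addressed word  -> 2 cycles
print(bus_cycles_8086(0x1001, False))  # single byte         -> 1 cycle
```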
MEMORY MAP
Still another view of the 8086/88 memory space could be as 16 64K-byte blocks beginning at hex address 00000h and ending at address 0FFFFFh. This division into 64K-byte blocks is an arbitrary but convenient choice, because the most significant hex digit increments by 1 with each additional block. That is, address 20000h is 65,536 bytes higher in memory than address 10000h. Be sure to note that five hex digits are required to represent a memory address.

The diagram is called a memory map. This is because, like a road map, it is a guide showing how the system memory is allocated. This type of information is vital to the programmer, who must know exactly where his or her programs can be safely loaded.
Note that some memory locations are marked reserved and others dedicated. The dedicated locations are used for processing specific system interrupts and the reset function. Intel has also reserved several locations for future H/W and S/W products. If you make use of these memory locations, you risk incompatibility with these future products.
SEGMENT REGISTERS
Within the 1 MB of memory space the 8086/88 defines four 64K-byte memory blocks called the code segment, stack segment, data segment, and extra segment. Each of these blocks of memory is used differently by the processor.
The code segment holds the program instruction codes. The data segment stores data for the program. The extra segment is an extra data segment (often used for shared data). The stack segment is used to store interrupt and subroutine return addresses.
You should realize that the concept of segmented memory is a unique one. Older-generation microprocessors such as the 8-bit 8085 or Z-80 could access only one 64K-byte segment. This meant that the program instructions, data, and subroutine stack all had to share the same memory. This limited the amount of memory available for the program itself and led to disaster if the stack should happen to overwrite the data or program areas.
The four segment registers (CS, DS, ES, and SS) are used to "point" at location 0 (the base address) of each segment. This is a little "tricky" because the segment registers are only 16 bits wide, but the memory address is 20 bits wide. The BIU takes care of this problem by appending four 0's to the low-order bits of the segment register. In effect, this multiplies the segment register contents by 16.

The point to note is that the beginning segment address is not arbitrary: it must begin at an address divisible by 16. Another way of saying this is that the low-order hex digit must be 0.
Also note that the four segments need not be defined separately. Indeed, it is allowable for all four segments to completely overlap (CS = DS = ES = SS).
Memory locations not defined to be within one of the current segments cannot be accessed by the 8086/88 without first redefining one of the segment registers to include that location. Thus at any given instant a maximum of 256 K (64K * 4) bytes of memory can be utilized. As we will see, the contents of the segment registers can only be specified via S/W. As you might imagine, instructions to load these registers should be among the first given in any 8086/88 program.
LOGICAL AND PHYSICAL ADDRESS
Addresses within a segment can range from 0000h to 0FFFFh. This corresponds to the 64K-byte length of the segment. An address within a segment is called an offset or logical address. A logical address gives the displacement from the base address of the segment to the desired location within it, as opposed to its "real" address, which maps directly anywhere into the 1 MB memory space. This "real" address is called the physical address.
What is the difference between the physical and the logical address?
The physical address is 20 bits long and corresponds to the actual binary code output by the BIU on the address bus lines. The logical address is an offset from location 0 of a given segment.

When two segments overlap it is certainly possible for two different logical addresses to map to the same physical address. This can have disastrous results when the data begins to overwrite the subroutine stack area, or vice versa. For this reason you must be very careful when segments are allowed to overlap.
You should also be careful when writing addresses on paper to do so clearly. To specify the logical address XXXX in the stack segment, use the convention SS:XXXX, which is equal to [SS] * 16 + XXXX.
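A minimal sketch of the segment:offset arithmetic above, including an example of two different logical addresses mapping to the same physical address:

```python
# Physical address = segment register * 16 + offset (both 16-bit values),
# truncated to the 8086's 20-bit address bus.
def physical_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x1000, 0x0100)))  # 0x10100
print(hex(physical_address(0x1010, 0x0000)))  # 0x10100 -- same physical byte
print(hex(physical_address(0xFFFF, 0x0010)))  # wraps around to 0x00000
```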
ADVANTAGES OF SEGMENTED MEMORY
Segmented memory can seem confusing at first. What you must remember is that the program op-codes will be fetched from the code segment, while program data variables will be stored in the data and extra segments. Stack operations use registers BP or SP and the stack segment. As we begin writing programs the consequences of these definitions will become clearer.
An immediate advantage of having separate data and code segments is that one program can work on several different sets of data. This is done by reloading register DS to point to the new data. Perhaps the greatest advantage of segmented memory is that programs that reference logical addresses only can be loaded and run anywhere in memory. This is because the logical addresses always range from 00000h to 0FFFFh, independent of the code segment base. Such programs are said to be relocatable, meaning that they will run at any location in memory. The requirements for writing relocatable programs are that no references be made to physical addresses, and no changes to the segment registers are allowed.




DVD Drive and CD Drive

Processor

Motherboard


The experience with a PC depends on a combination of the processor, the memory, the motherboard, the graphics and sound components, and the hard disk that the PC has.
The first two things to consider when buying a PC are the motherboard and the processor, which are interdependent. A particular processor only goes on a particular kind of motherboard. For each kind of motherboard, there might be different brands available, with different features. It is better to choose the processor first and then decide on the brand of motherboard, based on its features.


A.   MOTHERBOARD

The motherboard is the base of a PC: all the components fit on it. It also has a master brain called the chipset, which decides what will work and how. The motherboard is chosen based on:

i. Processor: The processor sits on a main board called the motherboard, in a particular slot or socket. This slot determines which processor will go on the motherboard.

ii. Graphics: Motherboards can also be chosen based on whether or not they have onboard (integrated) graphics. The earlier graphics cards used to be PCI cards (fitting into a PCI slot on the motherboard). Later, something called the Accelerated Graphics Port, or AGP, was developed especially for graphics cards and made graphics faster. There are motherboards that have graphics capabilities built into them; however, the graphics from these are only good enough for browsing, Word, Excel, etc., not for heavy 3D games or graphics. Intel 810 chipset motherboards come with onboard graphics and are a real money saver. For serious gaming and graphics, we need an AGP slot on the motherboard so that a graphics card can be added to it. Intel 815 chipset based motherboards come with onboard graphics but also have an AGP slot, so we can go with onboard graphics initially and get a good graphics card later. The new P4s go on Intel 850 chipset based motherboards, which have slots for RDRAM memory modules and an AGP slot.
• Look for the number of slots for add-on cards. Apart from the AGP slot, look for the number of PCI slots on the board for an internal modem, TV tuner/video capture cards, and other accessories.
• Look for the number of RAM slots and how much RAM the board can take. Some new motherboards have slots to take even 2 GB of RAM.

B. PROCESSOR

The processor, which is the brain of a PC, is often chosen with price as the main criterion, but changing the processor often means changing the motherboard.

The other extreme, when the budget is unlimited, is to scramble for the latest, fastest processor. There will always be a faster one just around the corner, but it should not be exceedingly beyond our requirements, say if our applications are simply writing documents in Word, browsing the Net, and sending and receiving e-mail. Choose the processor keeping in mind our activities on the PC, but don't be stingy either.


Both the processor as well as motherboard should be chosen keeping in mind the fact that they’re both very tough to upgrade—because they’re expensive and when we change them, we have to change a lot of things along with them, almost like overhauling the PC.
Types of processors
Pentium and AMD processors are some of the processor options. AMD processors have been around for a long time; the ones that made an impact in recent times are the Duron and the Athlon. Intel has a value option, the Celeron, and a high-performance processor, the P4, while AMD has the Duron as its value proposition and the Athlon as its high-end one. Nowadays, processors are coming in as dual core, which is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one.
A dual-core processor uses slightly less power than two coupled single-core processors, principally because driving signals off-chip between two separate processors requires extra power and because the smaller silicon process geometry allows the cores to operate at lower voltages. Keeping both cores on the same die also reduces the latency of communication between them.
Most processors today are 32-bit. The number of bits refers to how many bits can be processed in parallel, or the number of bits used to represent a single element in a data format. Increasingly, future software is going to be available in 64-bit versions.
Realities of bits in Processors
• A 32-bit CPU can process 32 bits of data at a time. If the data has more than 32 bits, the processor takes up the first 32 bits and processes them, and then the next group of 32 bits is taken up for processing.
• Hence a 64-bit CPU performs better than a 32-bit processor.
• 64-bit is very useful for 3D animators, game developers, CAD/CAM engineers, and automobile manufacturers.
• A 32-bit CPU can access only 4 GB (2^32 bytes) of main memory, while a 64-bit CPU can address up to about 17 billion GB, which is more than enough for any present and near-future application (see the sketch after this list).
• A 64-bit CPU needs a 64-bit OS and 64-bit applications to deliver optimum results. Some 64-bit CPUs allow 32-bit applications and OSes to run, but that is a point of under-utilization.
• A 64-bit processor doubles the bandwidth within the processor core, while dual core gives 2 processor cores inside a single processor. 64-bit is like fitting a car with a more powerful engine, while dual core is fitting the same car with 2 engines, which may or may not be as powerful as the one replaced.
• The entry-level chipset is the 915, while others include the 925, 945, and 955 chipsets.
• The 945 and 955 based chipsets support dual-core processors (called Pentium D).
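As promised in the list above, the address-space figures can be checked with a couple of lines of arithmetic:

```python
# Address space reachable with 32-bit versus 64-bit addresses.
GB = 2 ** 30   # bytes per gigabyte

addressable_32 = 2 ** 32 / GB          # 4.0 GB
addressable_64 = 2 ** 64 / GB          # about 17.2 billion GB

print(f"32-bit: {addressable_32:.0f} GB")
print(f"64-bit: {addressable_64:,.0f} GB (~17 billion GB)")
```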
Need of upgrade
It is time for an upgrade…
  • When we are moving to a newer operating system (OS). Roughly, the memory requirement and the hard disk space requirement double as we move to a new version of Windows, for an optimal experience. Intel recommends a P4 for Windows XP.
  • When we go for a new set of applications, like working with videos, pictures, etc.
Precautions in selection of motherboard and processor
· Take a motherboard that has support for DDR SDRAM, as the prices of DDR are very competitive.
· If we want a basic machine for functions like MS Office, 2D games, etc., look for a motherboard that supports onboard graphics and that has an AGP slot, which will help if we need to add an AGP card at a later date. Also, look for the VIA KM-266/400 chipset or the nVidia chipset.
· Mix and match is not possible with processor brands. Don't put an AMD processor on an Intel processor's motherboard, or vice versa. They are two different entities and need their own motherboards.
· Don't go for a P III motherboard that claims to take a P4. We won't get the P4 performance, since the motherboard doesn't support it. We'll also get into software compatibility issues.
· When we get a new processor and a new motherboard, we need to check whether the existing power supply is enough. We'll probably have to go for a new SMPS too.