One of the first things you learned about Frame is that the LMI also serves as a keepalive, or a heartbeat - and if three consecutive LMIs are missed, the line protocol goes down. There's a limitation to LMI as a keepalive, though. The LMI is exchanged only between the DTE and the closest DCE. The LMI is therefore a local keepalive that does not reflect any possible issues on the remote end of the virtual circuit.
Taking the LMI concept to the next logical level, Frame Relay End-To-End Keepalives (FREEK, one of the least-heard Cisco acronyms for some reason) are used to verify that endpoint-to-endpoint communications are functioning properly.
What you have to keep in mind about FREEK is that each and every PVC needs two separate keepalive processes. Remember, with a PVC, there's no guarantee that the path taken through the frame relay cloud to get from R1 to R2 is going to be the same path taken to go back from R2 to R1. One process will be used to send requests for information and handle the responses to these requests; this is the send side. When the send side transmits a keepalive request, a response is expected within a certain number of seconds. If one is not received, an error event is noted. If enough error events are recorded, the VC's keepalive status is marked as down.
The process that responds to the other side's requests is the receive side.
This being Cisco, we've got to have some modes, right? FREEK has four operational modes.
Bidirectional mode enables both the send and receive processes on the router, meaning that the router will send requests and process responses (send side) and will also respond to remote requests for information (receive side).
Request mode enables only the send process. The router will send requests and process responses to those requests, but will not answer requests from other routers.
Reply mode enables only the receive process. The router will respond to requests from other routers but will initiate no requests of its own.
Finally, passive reply mode allows the router to respond to requests, but no timers are set and no events are tracked.
Frame Relay End-To-End Keepalive defaults:
Two send or receive errors must be registered in order for the VC to be considered down.
The event window size is three. The event window is the number of events considered by the router when determining the status of the VC. Therefore, using the defaults, two send or receive errors would have to be received within the event window of three events for the VC to be considered down.
The timer mentioned earlier - the amount of time a router waits for a response - is set to 10 seconds.
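To pull the modes and defaults together, here's a minimal configuration sketch - the map-class name, subinterface, and DLCI are placeholders, and exact command availability varies by IOS release. End-to-end keepalives are configured under a Frame Relay map-class and then applied to the PVC:
R2(config)#map-class frame-relay EEK
R2(config-map-class)#frame-relay end-to-end keepalive mode bidirectional
R2(config-map-class)#frame-relay end-to-end keepalive timer send 10
R2(config-map-class)#frame-relay end-to-end keepalive event-window send 3
R2(config-map-class)#frame-relay end-to-end keepalive error-threshold send 2
R2(config-map-class)#interface serial0.12 point-to-point
R2(config-subif)#frame-relay interface-dlci 122
R2(config-fr-dlci)#class EEK
Once that's in place, show frame-relay end-to-end keepalive displays the send-side and receive-side statistics for each VC.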
Working with Frame Relay end-to-end keepalives is just one Frame skill you’ll need to pass the CCNP exams – and I wouldn’t be surprised to see them on a CCIE exam. Know the details and you’re on your way to Cisco certification exam success!
Tuesday, December 23, 2008
Cisco CCNA / CCNP Exam Tutorial: Five Debugs You Must Know
To pass the BSCI exam and move one step closer to CCNP certification success, you've got to know how and when to use debug commands to troubleshoot and verify network operations. While you should never practice debug commands on a production network, it's important to get some hands-on experience with them and not rely on "router simulators" and books to learn about them.
When it comes to RIP, "debug ip rip" is the primary debug to use. This debug will show you the contents of the routing update packets, and is vital in diagnosing RIP version mismatches and routing update authentication issues.
You know how to use the variance command to configure unequal-cost load-sharing with IGRP, but unlike EIGRP, IGRP has no topology table to show you the alternate-route metrics you need. With IGRP, use the "debug ip igrp transactions" command to see these vital metrics.
Several factors are considered by OSPF-enabled routers when it comes to forming adjacencies, including hello and dead timer settings. If an adjacency doesn't form when you think it should, run "debug ip ospf adj". The reason the adjacency isn't forming is usually seen quickly with this command's output.
Let's not ignore Layer Two! If frame relay mappings are not forming according to your configuration, run "debug frame lmi". This debug will allow you to quickly diagnose and correct any LMI mismatches.
When it comes to PPP, it can be very frustrating to try to spot a problem with a password or username. Instead of staring at the configuration for 10 minutes, run "debug ppp negotiation" and send a ping over the link. This command will help you spot the router with the misconfigured username or password, not to mention saving you a lot of time!
Effectively using debugs during your CCNA and CCNP exam study will help you truly understand what's going on "behind the command" - and it will really come in handy on that day when your production network just isn't doing what you (think) you told it to do!
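For quick reference, here are those five debugs in their full, unabbreviated forms, along with two housekeeping commands worth memorizing (the router name is just an example):
R1#terminal monitor
R1#debug ip rip
R1#debug ip igrp transactions
R1#debug ip ospf adj
R1#debug frame-relay lmi
R1#debug ppp negotiation
R1#undebug all
terminal monitor lets you see debug output over a Telnet session instead of only on the console, and undebug all ("u all" for short) turns every debug off in one shot - a command you'll be very glad you remembered in the middle of a debug flood.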
Cisco CCNA / CCNP Certification Exam Tutorial: Dialer Watch
Dialer Watch is a vital part of your CCNA and CCNP studies, particularly for the BCRAN exam, but it's one of the most misunderstood technologies as well. To help you pass the CCNA and CCNP certification exams, here's a detailed look at Dialer Watch.
Dialer Watch allows you to configure a route or routes as "watched"; when a watched route leaves the routing table and there is no other valid route to that specific destination, the ISDN link will come up. In the following example, R1 and R2 are connected by both a Frame Relay cloud over the 172.12.123.0 /24 network and an ISDN cloud using the 172.12.12.0 /24 network. The routers are running OSPF over the Frame cloud, and R1 is advertising its loopback of 1.1.1.1/32 as well as an Ethernet segment, 10.1.1.0/24, via OSPF. R2 has both of these routes in its OSPF table, as shown below.
R2#show ip route ospf
1.0.0.0/32 is subnetted, 1 subnets
O 1.1.1.1 [110/65] via 172.12.123.1, 00:00:07, Serial0
10.0.0.0/24 is subnetted, 1 subnets
O 10.1.1.0 [110/128] via 172.12.123.1, 00:00:08, Serial0
We want R2 to place a call to R1 if either the loopback or Ethernet networks leave R2's routing table, but we don't want to have to depend on interesting traffic. That dictates the use of Dialer Watch.
First, configure the list of watched routes with dialer watch-list. Only one of the watched routes needs to leave the routing table for the ISDN link to come up. In this example, R2 will watch both routes from its OSPF routing table.
Be careful with this command. The entries here need to match exactly the routes and masks being watched. Dialer watch-lists use subnet masks, not wildcard masks.
R2(config)#dialer watch-list 5 ip 10.1.1.0 255.255.255.0
R2(config)#dialer watch-list 5 ip 1.1.1.1 255.255.255.255
Next, configure the dialer watch-group command on the BRI interface, AND dialer map statements for the watched routes. As with dialer-list and dialer-group, the group number referenced in the dialer watch-group command must match the number assigned to the dialer watch-list.
The Dialer Watch configuration will not work without a dialer map statement for each watched route. I repeat this because this is the step a lot of people leave out.
R2(config)#interface bri0
R2(config-if)#dialer watch-group 5
R2(config-if)# dialer map ip 1.1.1.1 255.255.255.255 name R1 5557777 broadcast
R2(config-if)# dialer map ip 10.1.1.0 255.255.255.0 name R1 5557777 broadcast
To test Dialer Watch, the Serial0 interface on R2 will be shut down. Since we're running OSPF, the route table will be updated almost immediately and the ISDN link should come up right after that.
R2(config)#int s0
R2(config-if)#shut
01:12:47: %OSPF-5-ADJCHG: Process 1, Nbr 1.1.1.1 on Serial0 from FULL to DOWN, Neighbor Down: Interface down or detached
01:12:47: %LINK-3-UPDOWN: Interface BRI0:1, changed state to up
01:12:48: %SYS-5-CONFIG_I: Configured from console by console
01:12:48: %LINEPROTO-5-UPDOWN: Line protocol on Interface BRI0:1, changed state to up
01:12:49: %LINK-5-CHANGED: Interface Serial0, changed state to administratively down
01:12:50: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to down
01:12:53: %ISDN-6-CONNECT: Interface BRI0:1 is now connected to 5557777 R1
Within five seconds, the ISDN link is up. show dialer verifies that Dialer Watch is the reason the line was brought up.
R2#show dialer
BRI0 - dialer type = ISDN
Dial String Successes Failures Last DNIS Last status
5557777 2 0 00:00:11 successful
0 incoming call(s) have been screened.
0 incoming call(s) rejected for callback.
BRI0:1 - dialer type = ISDN
Idle timer (120 secs), Fast idle timer (20 secs)
Wait for carrier (30 secs), Re-enable (15 secs)
Dialer state is data link layer up
Dial reason: Dialing on watched route loss
Time until disconnect 108 secs
Connected to 5557777 (R1)
A final note regarding Dialer Watch ... it will not work with RIP, but will with all our other dynamic IGPs (IGRP, EIGRP, OSPF).
Cisco CCNA / CCNP Certification Exam Tutorial: Configuring PPP Callback
You may run into situations where a router in a remote location needs to dial in to a central router, but the toll charges are much higher if the remote router makes the call. This scenario is perfect for PPP Callback, where the callback client places a call to a callback server, authentication takes place, and the server then hangs up on the client! This ensures that the client isn't charged for the call. The server then calls the client back.
In the following example, R2 has been configured as the client and R1 is the callback server. Let's look at both configurations and the unique commands PPP Callback requires.
Client:
username R1 password CCIE
interface BRI0
ip address 172.12.12.2 255.255.255.0
encapsulation ppp
dialer map ip 172.12.12.1 name R1 broadcast 5557777
dialer-group 1
isdn switch-type basic-ni
ppp callback request
ppp authentication chap
Most of that configuration will look familiar to you, but the ppp callback request command might not. This command enables the BRI interface to request the callback.
Simple enough, right? The PPP Callback Server config requires more configuration and an additional map-class as well.
Server:
username R2 password CCIE
interface BRI0
ip address 172.12.12.1 255.255.255.0
encapsulation ppp
dialer callback-secure
dialer map ip 172.12.12.2 name R2 class CALL_R2_BACK broadcast 5558888
dialer-group 1
isdn switch-type basic-ni
ppp callback accept
ppp authentication chap
map-class dialer CALL_R2_BACK
dialer callback-server username
Examining the PPP Callback Server command from the top down...
dialer callback-secure enables security on the callback. If the remote router cannot be authenticated for callback, the incoming call will be disconnected.
The dialer map statement now calls the class CALL_R2_BACK, shown at the bottom of the config excerpt.
ppp callback accept enables PPP callback on this router.
dialer callback-server username tells the callback server that the device referenced in the dialer map statement is a callback client.
The only way to find out if the config works is to test it, so let's send a ping from R2 to R1 and see if the callback takes place.
R2#ping 172.12.12.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.12.1, timeout is 2 seconds:
02:45:42: BR0 DDR: Dialing cause ip (s=172.12.12.2, d=172.12.12.1)
02:45:42: BR0 DDR: Attempting to dial 5557777
02:45:42: %LINK-3-UPDOWN: Interface BRI0:1, changed state to up
02:45:42: BR0:1 DDR: Callback negotiated - Disconnecting now
02:45:42: BR0:1 DDR: disconnecting call
02:45:42: %ISDN-6-CONNECT: Interface BRI0:1 is now connected to 5557777 R1
02:45:42: %LINK-3-UPDOWN: Interface BRI0:1, changed state to down
02:45:42: DDR: Callback client for R1 5557777 created
02:45:42: BR0:1 DDR: disconnecting call.....
Success rate is 0 percent (0/5)
R2#
02:45:57: %LINK-3-UPDOWN: Interface BRI0:1, changed state to up
R2#
02:45:57: BR0:1 DDR: Callback received from R1 5557777
02:45:57: DDR: Freeing callback to R1 5557777
02:45:57: BR0:1 DDR: dialer protocol up
02:45:58: %LINEPROTO-5-UPDOWN: Line protocol on Interface BRI0:1, changed state to up
The callback was successfully negotiated, and the call then disconnected. R1 then called R2 back, and show dialer on R1 confirms the purpose of the call.
R1#show dialer
BRI0 - dialer type = ISDN
Dial String Successes Failures Last DNIS Last status
5558888 2 4 00:00:20 successful
0 incoming call(s) have been screened.
0 incoming call(s) rejected for callback.
BRI0:1 - dialer type = ISDN
Idle timer (120 secs), Fast idle timer (20 secs)
Wait for carrier (30 secs), Re-enable (15 secs)
Dialer state is data link layer up
Dial reason: Callback return call
Time until disconnect 99 secs
Connected to 5558888 (R2)
Pretty cool! PPP Callback isn’t just important for passing your CCNA and CCNP exams – in circumstances such as shown in this example, it can save your organization quite a bit of money!
Cisco CCNA / CCNP Certification Exam Tutorial: ISDN And Multilink PPP
ISDN is a huge topic on both your Cisco CCNA and BCRAN CCNP exams. While many ISDN topics seem straightforward, it’s the details that make the difference in the exam room and working with ISDN in production networks. Configuring and troubleshooting multilink PPP is just one of the skills you’ll need to pass both of these demanding exams.
With BRI, we've got two B-channels to carry data, and both of them have a 64-kbps capacity. You might think it would be a good idea to have both channels in operation before one reaches capacity, and it is a great idea. Problem is, it's not the default behavior of ISDN. The second B-channel will not begin to carry traffic until the first one reaches capacity.
With Multilink PPP (MLP), a bandwidth capacity can be set that will allow the second b-channel to bear data before the first channel reaches capacity. The configuration for MLP is simple, but often misconfigured. We'll use our good friend IOS Help to verify the measurement this command uses.
Enabling MLP is a three-step process:
Enable PPP on the link
Enable MLP with the command ppp multilink
Define the threshold at which the second b-channel should start carrying data with the dialer load-threshold command.
Let's say you wanted the second b-channel to start carrying data when the first channel reaches 75% of capacity. It would make sense that the command to do so would be dialer load-threshold 75... but it's not.
R1(config)#int bri0
R1(config-if)#ppp multilink
R1(config-if)#dialer load-threshold ?
<1-255> Load threshold to place another call
The dialer load-threshold value is based on 255, not 100. To have this command bring the line up at a certain percentage, multiply that percentage in decimal format by 255. Below, I multiplied 255 by .75 (75%) to arrive at 191.
R1(config-if)#dialer load-threshold 191 ?
either Threshold decision based on max of inbound and outbound traffic
inbound Threshold decision based on inbound traffic only
outbound Threshold decision based on outbound traffic only
R1(config-if)#dialer load-threshold 191 either
As illustrated by IOS Help in the above configuration, dialer load-threshold has additional options as well. You can configure the interface to consider only incoming, outgoing, or all traffic when calculating when to bring the next channel up.
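Putting the three steps together, a consolidated BRI configuration would look something like this sketch (the interface and the 75% threshold are simply the values from this discussion):
R1(config)#interface bri0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp multilink
R1(config-if)#dialer load-threshold 191 either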
Configuring Multilink PPP is just one of the skills you’ll need to earn your CCNA and pass the CCNP BCRAN exam. Don’t underestimate ISDN on Cisco’s certification exams!
Cisco CCNA / CCNP Certification Exam Review: Protocol Basics
To earn your Cisco CCNA certification and pass the BSCI CCNP exam, you have to know your protocol basics like the back of your hand! To help you review these important concepts, here's a quick look at the basics of RIPv1, RIPv2, IGRP, and EIGRP.
RIPv1: Broadcasts updates every 30 seconds to the address 255.255.255.255. RIPv1 is a classful protocol, and it does not recognize VLSM, nor does it carry subnet masking information in its routing updates. Update contains entire RIP routing table. Uses Bellman-Ford algorithm. Allows equal-cost load-balancing by default. Max hop count is 15. Does not support clear-text or MD5 authentication of routing updates. Updates carry 25 routes maximum.
RIPv2: Multicasts updates every 30 seconds to the address 224.0.0.9. RIPv2 is a classless protocol, allowing the use of subnet masks. Update contains entire RIP routing table. Uses Bellman-Ford algorithm. Allows equal-cost load-balancing by default. Max hop count is 15. Supports clear-text and MD5 authentication of routing updates. Updates carry 25 routes maximum.
IGRP: Broadcasts updates every 90 seconds to the address 255.255.255.255. IGRP is a Cisco-proprietary, classful protocol that does not recognize subnet masking. Update contains entire routing table. Uses Bellman-Ford algorithm. Equal-cost load-balancing is on by default; unequal-cost load-sharing can be enabled with the variance command. Max hop count is 100.
EIGRP: Multicasts full routing table only when an adjacency is first formed. Multicasts updates only when there is a change in the network topology, and then only advertises the change. Multicasts to 224.0.0.10 and allows the use of subnet masks. Uses DUAL routing algorithm. Unequal-cost load-sharing available with the variance command.
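If you'd like to see a couple of these defaults in action, here's a minimal sketch of enabling RIPv2 and EIGRP unequal-cost load-sharing (the network and AS numbers are placeholders):
R1(config)#router rip
R1(config-router)#version 2
R1(config-router)#network 10.0.0.0
R1(config-router)#no auto-summary
R1(config-router)#router eigrp 100
R1(config-router)#network 10.0.0.0
R1(config-router)#variance 2
With variance 2, EIGRP will install any feasible successor route whose metric is less than twice the successor's metric, giving you unequal-cost load-sharing.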
By mastering the basics of these protocols, you're laying the foundation for success in the exam room and when working on production networks. Pay attention to the details and the payoff is "CCNA" and "CCNP" behind your name!
Cisco CCNA / CCNP Certification Exam Lab: Frame Relay Subinterfaces And Split Horizon
Earning your Cisco CCNA and CCNP is a tough proposition, and part of that is learning that there's usually more than one way to do things with Cisco routers – and while that's generally a good thing, you'd better know the ins and outs of all the options when it comes to test day and when working on production networks. Working with Frame Relay subinterfaces and split horizon is just one such situation.
One reason for the use of subinterfaces is to circumvent the rule of split horizon. You recall from your CCNA studies that split horizon dictates that a route cannot be advertised out the same interface upon which it was learned in the first place. In the following example, R1 is the hub and R2 and R3 are the spokes. All three routers are using their physical interfaces for frame relay connectivity, and they are also running RIPv2 over the 172.12.123.0 /24 network. Each router is also advertising a loopback interface, using the router number for each octet.
R1(config)#int s0
R1(config-if)#ip address 172.12.123.1 255.255.255.0
R1(config-if)#no frame inverse
R1(config-if)#frame map ip 172.12.123.2 122 broadcast
R1(config-if)#frame map ip 172.12.123.3 123 broadcast
R1(config-if)#no shut
R2(config)#int s0
R2(config-if)#encap frame
R2(config-if)#no frame inver
R2(config-if)#frame map ip 172.12.123.1 221 broadcast
R2(config-if)#frame map ip 172.12.123.3 221 broadcast
R2(config-if)#ip address 172.12.123.2 255.255.255.0
R3(config)#int s0
R3(config-if)#encap frame
R3(config-if)#no frame inver
R3(config-if)#frame map ip 172.12.123.1 321 broadcast
R3(config-if)#frame map ip 172.12.123.2 321 broadcast
R3(config-if)#ip address 172.12.123.3 255.255.255.0
R1#show ip route rip
2.0.0.0/32 is subnetted, 1 subnets
R 2.2.2.2 [120/1] via 172.12.123.2, 00:00:20, Serial0
3.0.0.0/32 is subnetted, 1 subnets
R 3.3.3.3 [120/1] via 172.12.123.3, 00:00:22, Serial0
R2#show ip route rip
1.0.0.0/32 is subnetted, 1 subnets
R 1.1.1.1 [120/1] via 172.12.123.1, 00:00:06, Serial0
R3#show ip route rip
1.0.0.0/32 is subnetted, 1 subnets
R 1.1.1.1 [120/1] via 172.12.123.1, 00:00:04, Serial0
The hub router R1 has a route to both loopbacks, but neither spoke has a route to the other spoke's loopback. That's because split horizon prevents R1 from advertising a network via Serial0 if the route was learned on Serial0 to begin with.
We've got two options here, one of which is to disable split horizon on the interface. While doing so will have the desired effect in our little network, disabling split horizon is not a good idea and should be avoided whenever possible. We're not going to do it in this lab, but here is the syntax to do so:
R1(config)#interface serial0
R1(config-if)#no ip split-horizon
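By the way, if you're ever unsure whether split horizon is currently enabled on an interface, show ip interface will tell you. A trimmed example (the exact wording can vary a bit between IOS versions):
R1#show ip interface serial0 | include Split
Split horizon is enabled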
A better solution is to configure subinterfaces on R1. The IP addressing will have to be revisited, but that's no problem here. R1 and R2 will use 172.12.123.0 /24 to communicate, while R1 and R3 will use 172.12.13.0 /24. R3's serial0 interface will need to be renumbered, so let's look at all three router configurations:
R1(config)#interface serial0
R1(config-if)#encap frame
R1(config-if)#no frame inverse-arp
R1(config-if)#no ip address
R1(config-if)#interface serial0.12 multipoint
R1(config-subif)#ip address 172.12.123.1 255.255.255.0
R1(config-subif)#frame map ip 172.12.123.2 122 broadcast
R1(config-subif)#interface serial0.31 point-to-point
R1(config-subif)#ip address 172.12.13.1 255.255.255.0
R1(config-subif)#frame interface-dlci 123
R2(config)#int s0
R2(config-if)#ip address 172.12.123.2 255.255.255.0
R2(config-if)#encap frame
R2(config-if)#frame map ip 172.12.13.3 221 broadcast
R2(config-if)#frame map ip 172.12.123.1 221 broadcast
R3(config)#int s0
R3(config-if)#ip address 172.12.13.3 255.255.255.0
R3(config-if)#encap frame
R3(config-if)#frame map ip 172.12.13.1 321 broadcast
R3(config-if)#frame map ip 172.12.123.2 321 broadcast
A frame map statement always names the REMOTE IP address and the LOCAL DLCI. Don't forget the broadcast option!
Show frame map shows us that all the static mappings on R1 are up and running. Note the "static" output, which indicates these mappings are a result of using the frame map command. Pings are not shown, but all three routers can ping each other at this point.
R1#show frame map
Serial0 (up): ip 172.12.123.2 dlci 122(0x7A,0x1CA0), static,
broadcast, CISCO, status defined, active
Serial0 (up): ip 172.12.13.3 dlci 123(0x7B,0x1CB0), static,
broadcast, CISCO, status defined, active
After the 172.12.13.0 /24 network is added to R1 and R3’s RIP configuration, R2 and R3 now have each other's loopback network in their RIP routing tables.
R2#show ip route rip
1.0.0.0/32 is subnetted, 1 subnets
R 1.1.1.1 [120/1] via 172.12.123.1, 00:00:20, Serial0
3.0.0.0/32 is subnetted, 1 subnets
R 3.3.3.3 [120/1] via 172.12.123.1, 00:00:22, Serial0
R3#show ip route rip
1.0.0.0/32 is subnetted, 1 subnets
R 1.1.1.1 [120/1] via 172.12.13.1, 00:00:20, Serial0
2.0.0.0/32 is subnetted, 1 subnets
R 2.2.2.2 [120/1] via 172.12.13.1, 00:00:22, Serial0
While turning split horizon off is one way to achieve total IP connectivity, doing so can have other unintended results. The use of subinterfaces is a more effective way of allowing the spokes to see the hub's loopback network.
Cisco CCNA / CCNP Certification Exam: Frame Relay BECNs and FECNs
BECNs and FECNs aren't just important to know for your Cisco CCNA and CCNP certification exams - they're an important part of detecting congestion on a Frame Relay network and allowing the network to dynamically adjust its transmission rate when congestion is encountered.
The Forward Explicit Congestion Notification (FECN, pronounced "feckon") bit is set to zero by default, and will be set to 1 if congestion was experienced by the frame in the direction in which the frame was traveling. A DCE (frame relay switch) will set this bit, and a DTE (router) will receive it, and see that congestion was encountered along the frame's path.
If network congestion exists in the opposite direction in which the frame was traveling, the Backward Explicit Congestion Notification (BECN, pronounced "beckon") will be set to 1 by a DCE.
If this is your first time working with BECNs and FECNs, you might wonder why the BECN even exists - after all, why send a "backwards" notification? The BECN is actually the most important part of this entire process, since it's the BECN bit that indicates to the sender that it needs to slow down!
For example, frames sent from Kansas City to Green Bay encounter congestion in the FR cloud. A Frame Switch sets the FECN bit to 1. In order to alert KC that it's sending data too fast, GB will send return frames with the BECN bit set. When KC sees the BECN bit is set to 1, the KC router knows that the congestion occurred when frames were sent from KC to GB.
Frame Relay BECN Adaptive Shaping allows a router to dynamically throttle back on its transmission rate if it receives frames from the remote host with the BECN bit set. In this case, KC sees that the traffic it's sending to GB is encountering congestion, because the traffic coming back from GB has the BECN bit set. If BECN Adaptive Shaping is running on KC, that router will adjust to this congestion by slowing its transmission rate. When the BECNs stop coming in from GB, KC will begin to send at a faster rate.
BECN Adaptive Shaping is configured as follows:
KC(config)#int s0
KC(config-if)#frame-relay adaptive-shaping becn
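Depending on the IOS version you're running, you'll more often see adaptive shaping configured inside a map-class along with Frame Relay traffic shaping rather than directly on the interface. Here's a sketch of that form - the map-class name, DLCI, and CIR/minCIR values are placeholders:
KC(config)#map-class frame-relay SHAPE_KC
KC(config-map-class)#frame-relay cir 64000
KC(config-map-class)#frame-relay mincir 32000
KC(config-map-class)#frame-relay adaptive-shaping becn
KC(config-map-class)#interface s0
KC(config-if)#frame-relay traffic-shaping
KC(config-if)#frame-relay interface-dlci 122
KC(config-fr-dlci)#class SHAPE_KC
With this in place, KC shapes its traffic to the CIR under normal conditions, throttles down toward the mincir value as BECNs come in, and ramps back up when they stop.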
To see how many frames are coming in and going out with the BECN and FECN bits set, run show frame pvc.
R3#show frame pvc
< some output removed for clarity >
input pkts 306 output pkts 609 in bytes 45566
out bytes 79364 dropped pkts 0 in FECN pkts 0
in BECN pkts 0 out FECN pkts 0 out BECN pkts 0
in DE pkts 0 out DE pkts 0
out bcast pkts 568 out bcast bytes 75128
pvc create time 01:26:27, last time pvc status changed 01:26:27
Just watch the "in"s and "out"s of BECN, FECN, and DE in both the exam room and your production networks!
Cisco CCNA / CCNP Certification Exam: Creating A Study Plan
Whether you're just starting to think about passing the CCNA or CCNP exams, or you've been on the certification track for a while, you've got to have a plan for success. If you wanted to drive your car from Florida to California, you'd create a plan to get there. You'd get a map and decide how far you wanted to drive per day, and maybe even make some hotel reservations in advance. You certainly wouldn't get in your car, just drive it randomly down the nearest highway, and hope you ended up in California, would you?
Certainly not. Earning your CCNA certification is the same way. It's not enough to just study a few minutes "when you feel like it", or tell yourself that you'll start studying for the exams "when I get such-and-such done". The perfect time to start on the road to Cisco certification is not tomorrow, and it's not next week. It's today.
You're much better off with one hour of solid study than three hours of interrupted, unfocused study. Here are a few ways to go about getting the kind of quality study time that will get you to the CCNA or CCNP (or any Cisco certification, for that matter!).
Schedule your study time, and regard this study time as you would an appointment with a client. If you were to meet a customer at 10:00 to discuss a network install, would you just decide not to show up and watch television instead? Not if you wanted the job. The same goes for your study time. That's an appointment with the most important customer of all - YOU.
Turn your cell, iPod, TV, instant messenger, and all other electronic collars off for the duration of your study time. I know those of us in information technology don't like to say this, but we can actually exist without being in touch with the world for a little while. You may even get to like it! Having uninterrupted study time is key to CCNA and CCNP exam success.
Finally, schedule your exam before you start studying. Contrary to what many people think, "deadline" is not a dirty word. We do our best work when we have a deadline and a schedule to keep. Make out your study schedule, schedule your exam, and get to work just as you would a network project for a customer. The project you're working on is your career and your life, and by following these simple steps you can make it a highly successful project - by passing your CCNA and CCNP exam!
Cisco CCNA / CCNP Certification Exam: Troubleshooting Direct Serial Connections
A prime topic of your CCNA and CCNP CIT exams will be connecting Cisco routers directly via their Serial interfaces, and while the configuration is straightforward, there are some vital details and show commands you must know in order to pass the exams and configure this successfully in production and home lab networks. Let's take a look at a sample configuration.
Connecting Cisco routers directly via their Serial interfaces works really well once you get it running - and getting such a connection up and running is easy enough. You can use show controller serial x to find out which endpoint is acting as the DCE, and it's the DCE that must be configured with the clockrate command.
R3#show controller serial 1
HD unit 1, idb = 0x11B4DC, driver structure at 0x121868
buffer size 1524 HD unit 1, V.35 DCE cable
R3(config)#int serial1
R3(config-if)#ip address 172.12.13.3 255.255.255.0
R3(config-if)#clockrate 56000
R3(config-if)#no shut
Failure to configure the clockrate has some interesting effects regarding the physical and logical state of the interfaces. Let's remove the clockrate from R3 and see what happens.
R3(config)#int s1
R3(config-if)#no clockrate 56000
R3(config-if)#
18:02:19: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to down
The line protocol doesn't drop immediately, but it does drop. Let's run show interface serial1 to compare the physical and logical interface states.
R3#show int serial1
Serial1 is up, line protocol is down
Physically, the interface is fine, so the physical interface is up. It's only the logical part of the interface - the line protocol - that is down. It's the same situation on R1.
R1#show inter serial1
Serial1 is up, line protocol is down
While a router misconfiguration is the most likely cause of a serial connection issue, that's not the only reason for clocking issues. Cisco's website documentation mentions CSU/DSU misconfiguration, out-of-spec cables, bad patch panel connections, and connecting too many cables together as other reasons for clocking problems. Still, the number one reason for clocking problems in my experience is simply forgetting to configure the clockrate command!
Cisco CCNA / CCNP Certification Exam: Same Command, Different Results
As a CCNA or CCNP, one thing you've got to get used to is that change is constant. Cisco regularly issues new IOS versions, not to mention the many different kinds of hardware they produce! While it's always nice to have "the latest and the greatest" when it comes to routers, switches, firewalls, etc., we have to be prepared for the fact that not all our clients are going to have that latest and greatest!
For instance, there are still quite a few Catalyst 5000 switches out there humming away, and if you're used to working on IOS-driven switches like the 2950, the same command can have dramatically different results.
Let's say you're going to examine the spanning tree protocol (STP) setup of a new client. You're used to working with newer 2950 switches, and you've always run show span on those switches to display spanning-tree information. Then, you run show span on a Catalyst 5000 - and something like this shows:
switch (enable) show span
Destination : Port 6/1
Admin Source : Port 6/2
Oper Source : Port 6/2
Direction : transmit/receive
Incoming Packets: disabled
Learning : enabled
Multicast : enabled
Filter : -
Status : active
Total local span sessions: 1
What's going on here?
The command show span on a 5000 will not show spanning tree stats - instead, what you're going to see are statistics relating to Switched Port ANalyzer (SPAN). Surprise!
Consider the reverse example, where you're used to running show span on 5000 switches to see SPAN information. When you run that on a 2950, you now know what you're going to get - spanning tree information! On a 2950, you'll need to run show monitor session, followed by the SPAN session number.
SW1#show monitor session 1
Session 1
---------
Type : Local Session
Source Ports :
Both : Fa0/1
Destination Ports : Fa0/2
Encapsulation : Native
Ingress: Disabled
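For reference, a local SPAN session like the one shown above would be created on a 2950 with something along these lines (the interface numbers are simply the ones from the sample output):
SW1(config)#monitor session 1 source interface fastethernet0/1 both
SW1(config)#monitor session 1 destination interface fastethernet0/2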
As a CCNA and CCNP, this is one of those things you just have to get used to. Commands are going to be different, sometimes radically so, between models. That's why you need to be adept with both IOS Help and Cisco's online documentation site. IOS Help is easy, but the online doc site takes a little getting used to. Once you learn how to navigate that site, a world of Cisco knowledge is at your fingertips.
Besides, when you sit for the CCIE lab exam, that will be the only friend you have! And a valuable friend it can be - you're just going to have to trust me on that one. :)
Cisco CCNA / CCNP Certification Exam: Frame Relay Encapsulation Types
When you're studying to pass the Cisco CCNA and CCNP certification exams, you quickly learn that there's always something else to learn. (You'll really pick up on this in your CCIE studies, trust me!) Today we'll take a look at an often-overlooked topic in Frame Relay, the encapsulation type. You don't exactly change this on a daily basis in production networks (not if you want to stay employed, anyway!), but it's an important exam topic that you must be familiar with.
The DCE and DTE must agree on the LMI type, but there's another value that must be agreed upon by the two DTEs serving as the endpoints of the VC. The Frame encapsulation can be left at the default of Cisco (which is Cisco-proprietary), or it can be changed to the industry-standard IETF, as shown below. If a non-Cisco router is the remote endpoint, IETF encapsulation must be used. Note that the default of Cisco isn't listed as an option by IOS Help, so you better know that one by heart!
R1(config)#int s0
R1(config-if)#encap frame ?
ietf Use RFC1490/RFC2427 encapsulation
R1(config-if)#encap frame ietf
What if a physical interface is in use and some remote hosts require Cisco encapsulation and others require IETF? The encapsulation type can be configured on a per-PVC basis as well. One encap type can be used on the interface, and any map statements that require a different encap type can have that specified in the appropriate map statement. In the following example, all PVCs will use the default Cisco encapsulation type except for the PVC using DLCI 122, since the frame map statement for that DLCI has ietf specified.
R1(config)#int s0/0
R1(config-if)#encap frame
R1(config-if)#frame map ip 172.12.123.3 123 broadcast
R1(config-if)#frame map ip 172.12.123.2 122 ietf broadcast
show frame map shows us that the mapping to DLCI 123 is using Cisco encapsulation, and DLCI 122 is using IETF.
R1#show frame map
Serial0 (up): ip 172.12.123.3 dlci 123(0x7B,0x1CB0), static
broadcast, CISCO, status defined, active
Serial0 (up): ip 172.12.123.2 dlci 122(0x7B,0x1CB0), static
broadcast, ietf, status defined, active
Just remember that Cisco is the default, and all PVCs will use Cisco unless you specify IETF in the frame map statement itself. You could also change the entire interface to use IETF for all mappings with the encapsulation frame-relay ietf command. For Cisco exams, as well as work on production networks, it's always a good idea to know more than one way to do something!
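Here's a minimal sketch of that interface-level change, reusing the serial interface from the example above; any per-DLCI frame map keywords shown earlier can still override it for individual PVCs:
R1(config)#interface serial 0/0
R1(config-if)#encapsulation frame-relay ietf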
Cisco CCNA / CCNP Certification Exam: Caller ID Screening And Callback
As a CCNA and/or CCNP candidate, you've got to be able to spot situations where Cisco router features can save your client money and time. For example, if a spoke router is calling a hub router and the toll charges at the spoke site are higher than that of the hub router, having the hub router hang up initially and then call the spoke router back can save the client money (and make you look good!)
A popular method of doing this is using PPP callback, but as we all know, it's a good idea to know more than one way to do things in Cisco World! A lesser-known but still effective method of callback is Caller ID Screening & Callback. Before we look at the callback feature, though, we need to know what Caller ID Screening is in the first place!
This feature is often referred to simply as "Caller ID", which can be a little misleading if you've never seen this service in operation before. To most of us, Caller ID is a phone service that displays the source phone number of an incoming call. Caller ID Screening has a different meaning, though. Caller ID Screening on a Cisco router is really another kind of password - it defines the phone numbers that are allowed to call the router.
The list of acceptable source phone numbers is created with the isdn caller command. Luckily for us, this command allows the use of x to specify a wildcard digit. The command isdn caller 555xxxx results in calls being accepted from any 7-digit phone number beginning with 555, and rejected in all other cases. We'll configure R2 to do just that and then send a ping from R1 to R2. To see the results of the Caller ID Screening, debug dialer will be run on R1 before sending the ping. I've edited this output, since the output you see here would be repeated five times - once for each ping packet.
R2(config-if)#isdn caller 555xxxx
R1#debug dialer
Dial on demand events debugging is on
R1#ping 172.12.12.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.12.2, timeout is 2 seconds:
03:30:25: BR0 DDR: Dialing cause ip (s=172.12.12.1, d=172.12.12.2)
03:30:25: BR0 DDR: Attempting to dial 8358662.
Success rate is 0 percent (0/5)
R1 doesn't give us any hints as to what the problem is, but we can see that the pings definitely aren't going through. On R2, show dialer displays the number of screened calls.
R2#show dialer
BRI0 - dialer type = ISDN
Dial String Successes Failures Last DNIS Last status
8358661 1 0 00:03:16 successful
7 incoming call(s) have been screened.
0 incoming call(s) rejected for callback.
The callback option mentioned in the last line shown above enables the router to reject a phone call, and then call that router back seconds later.
R2 will now be configured to initially hang up on R1, and then call R1 back.
R2(config-if)#isdn caller 8358661 callback
R1 will now ping R2. The pings aren't returned, but seconds later R2 calls R1 back.
R1#ping 172.12.12.2
Success rate is 0 percent (0/5)
R1#
03:48:12: BRI0: wait for isdn carrier timeout, call id=0x8023
R1#
03:48:18: %LINK-3-UPDOWN: Interface BRI0:1, changed state to up
R1#
03:48:18: BR0:1 DDR: dialer protocol up
R1#
03:48:19: %LINEPROTO-5-UPDOWN: Line protocol on Interface BRI0:1, changed state to up
R1#
03:48:24: %ISDN-6-CONNECT: Interface BRI0:1 is now connected to 8358662 R2
show dialer on R2 shows the reason for the call to R1 is a callback return call.
R2#show dialer
BRI0 - dialer type = ISDN
Dial String Successes Failures Last DNIS Last status
8358661 3 0 00:00:48 successful
7 incoming call(s) have been screened.
10 incoming call(s) rejected for callback.
BRI0:1 - dialer type = ISDN
Idle timer (120 secs), Fast idle timer (20 secs)
Wait for carrier (30 secs), Re-enable (15 secs)
Dialer state is data link layer up
Dial reason: Callback return call
Time until disconnect 71 secs
Connected to 8358661 (R1)
The drawback to Caller ID Callback is that not all telco switches support it, so if you have the choice between this and PPP Callback, you're probably better off with PPP Callback. However, it's always a good idea to know more than one way to get things done with Cisco!
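For comparison, here's a rough sketch of what the PPP Callback alternative can look like on the same pair of routers. The map-class name is made up for illustration, and a real configuration still needs the usual dialer map statements, interesting-traffic definitions, and PPP authentication on both ends - treat this as an outline, not a drop-in config. R1 asks to be called back, and R2 agrees to serve the callback:
R1(config)#interface bri0
R1(config-if)#ppp callback request
R2(config)#map-class dialer CALL_R1_BACK
R2(config-map-class)#dialer callback-server username
R2(config-map-class)#exit
R2(config)#interface bri0
R2(config-if)#ppp callback accept
R2(config-if)#dialer callback-secure
R2(config-if)#dialer map ip 172.12.12.1 name R1 class CALL_R1_BACK 8358661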
Cisco CCNA / CCNP Certification Exam: Cabling Your Home Lab
More CCNA and CCNP candidates than ever before are putting together their own home labs, and there's no better way to learn about Cisco technologies than working with the real thing. Getting the routers and switches is just part of putting together a great CCNA / CCNP home lab, though. You've got to get the right cables to connect the devices, and this is an important part of your education as well. After all, without the right cables, client networks are going to have a hard time working!
For your Cisco home lab, one important cable is the DTE/DCE cable. These cables have two major uses in a home lab. To practice directly connecting Cisco routers via Serial interfaces (an important CCNA skill), you'll need to connect them with a DTE/DCE cable. Second, if you plan on having a Cisco router act as a frame relay switch in your lab, you'll need multiple DTE/DCE cables to do so. (Visit my website's Home Lab Help section for a sample Frame Relay switch configuration.)
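If you do go the frame relay switch route, here's a minimal sketch of what a two-port frame switch configuration can look like - the interface numbers, DLCIs, and clock rate here are just assumptions for illustration. The switching router needs frame-relay switching enabled globally, the DCE ends of the DTE/DCE cables pointed at it, and frame-relay route statements tying the incoming and outgoing DLCIs together:
FRS(config)#frame-relay switching
FRS(config)#interface serial 0
FRS(config-if)#encapsulation frame-relay
FRS(config-if)#clock rate 64000
FRS(config-if)#frame-relay intf-type dce
FRS(config-if)#frame-relay route 102 interface Serial1 201
FRS(config-if)#interface serial 1
FRS(config-if)#encapsulation frame-relay
FRS(config-if)#clock rate 64000
FRS(config-if)#frame-relay intf-type dce
FRS(config-if)#frame-relay route 201 interface Serial0 102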
If you have multiple switches in your lab, that's great, because you'll be able to get a lot of spanning tree protocol (STP) work in as well as creating Etherchannels. To connect your switches, you'll need crossover cables.
You'll need some straight-through cables as well to connect your routers to the switches.
Finally, if you're lucky enough to have an access server as part of your lab, you'll need an octal cable to connect your AS to the other routers and switches in your lab. The octal cable has one large connector on one end and eight numbered RJ-45 connectors on the other end. The large connector should be attached to the async port on your AS, and the numbered RJ-45 connectors will be connected to the console ports on your other routers and switches.
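Once the octal cable is connected, you reach each console port via reverse telnet. Here's a bare-bones sketch - the loopback address, host names, and line range are assumptions, and the port number is simply 2000 plus the async line the device is plugged into:
AS(config)#interface loopback0
AS(config-if)#ip address 10.1.1.1 255.255.255.255
AS(config-if)#exit
AS(config)#ip host R1 2001 10.1.1.1
AS(config)#ip host R2 2002 10.1.1.1
AS(config)#line 1 8
AS(config-line)#transport input all
AS(config-line)#no exec
After that, typing R1 at the access server's prompt opens a console session to whatever device is plugged into line 1.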
Choosing and connecting the right cables for your Cisco CCNA / CCNP home lab is a great learning experience, and it's also an important part of your Cisco education. After all, great networks and home labs all begin at Layer One of the OSI model!
Cisco CCNA / CCNP Certification Exam: Attending A Video Boot Camp
When you're studying for the CCNA and CCNP exams, you've got a lot of different choices when it comes to training. One popular choice is choosing one of the many "boot camps" and five-day in-person courses that are out there. I've taught quite a few of these, and while many of them are good, they do have drawbacks.
Of course, one is cost. Many employers are putting the brakes on paying for CCNA and CCNP boot camps, and most candidates can't afford to pay thousands of dollars for such a class. Then you've got travel costs, meals, and having to possibly burn your own vacation time to take the class. Add in time away from your family and boot camps become impractical for many CCNA / CCNP candidates.
Another issue is fatigue. I enjoy teaching week-long classes, but let's face facts - whether you're training for the CCNA or CCNP exams, you're going to get a lot of information thrown at you in just a few days. You're going to be mentally and physically exhausted at the end of the week, and that's when some boot camps actually have you take the exam! You've got to be refreshed and rested when you take the exam to have your best chance of success.
How can you get the benefit of an experienced instructor without paying thousands of dollars? By taking a Video Boot Camp! There are some high-quality computer-based training (CBT) courses out there, and these courses offer quite a few advantages for the CCNA and CCNP candidate. These courses run hundreds instead of thousands of dollars, and you can train on your own schedule. It is important for you to make and keep that schedule, but instead of spending thousands of dollars and having to travel, you can get world-class CCNA and CCNP training in the comfort of your own home.
By combining a high-quality CCNA or CCNP CBT or video boot camp with a strong work ethic, you're on your way to passing the exam and accelerating your career. Now get to work!
Cisco CCNA / CCNP Certification: How And Why To Build An Etherchannel
CCNA and CCNP candidates are well-versed in Spanning-Tree Protocol, and one of the great things about STP is that it works well with little or no additional configuration. There is one situation where STP works against us just a bit while it prevents switching loops, and that is the situation where two switches have multiple physical connections.
You would think that if you have two separate physical connections between two switches, twice as much data could be sent from one switch to the other than if there was only one connection. STP doesn't allow this by default, however; in an effort to prevent switching loops from forming, one of the paths will be blocked.
SW1 and SW2 are connected via two separate physical connections, on ports fast 0/11 and fast 0/12. As we can see here on SW1, only port 0/11 is actually forwarding traffic. STP has put the other port into blocking mode (BLK).
SW1#show spanning vlan 10
(some output removed for clarity)
Interface Role Sts Cost Prio.Nbr Type
Fa0/11 Root FWD 19 128.11 P2p
Fa0/12 Altn BLK 19 128.12 P2p
While STP is helping us by preventing switching loops, STP is also hurting us by preventing us from using a perfectly valid path between SW1 and SW2. We could literally double the bandwidth available between the two switches if we could use that path that is currently being blocked.
The secret to using the currently blocked path is configuring an Etherchannel. An Etherchannel is simply a logical bundling of 2 - 8 physical connections between two Cisco switches.
Configuring an Etherchannel is actually quite simple. Use the command "channel-group 1 mode on" on every port you want placed into the Etherchannel. Of course, this must be done on both switches - if you configure an Etherchannel on one switch and don't do so on the correct ports on the other switch, the line protocol will go down and stay there.
The beauty of an Etherchannel is that STP sees the Etherchannel as one connection. If any of the physical connections inside the Etherchannel go down, STP does not see this, and STP will not recalculate. While traffic flow between the two switches will obviously be slowed, the delay in transmission caused by an STP recalculation is avoided. An Etherchannel also allows us to use multiple physical connections at one time.
Here's how to put these ports into an Etherchannel:
SW1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#interface fast 0/11
SW1(config-if)#channel-group 1 mode on
Creating a port-channel interface Port-channel 1
SW1(config-if)#interface fast 0/12
SW1(config-if)#channel-group 1 mode on
SW2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
SW2(config)#int fast 0/11
SW2(config-if)#channel-group 1 mode on
SW2(config-if)#int fast 0/12
SW2(config-if)#channel-group 1 mode on
The command "show interface trunk" and "show spanning-tree vlan 10" will be used to verify the Etherchannel configuration.
SW2#show interface trunk (some output removed for clarity)
Port Mode Encapsulation Status Native vlan
Po1 desirable 802.1q trunking 1
SW2#show spanning vlan 10 (some output removed for clarity)
Interface Role Sts Cost Prio.Nbr Type
Po1 Desg FWD 12 128.65 P2p
Before configuring the Etherchannel, we saw individual ports here. Now we see "Po1", which stands for the interface "port-channel1". This is the logical interface created when an Etherchannel is built. We are now using both physical paths between the two switches at one time!
That's one major benefit in action - let's see another. Ordinarily, if the single open path between two trunking switches goes down, there is a significant delay while another valid path is opened - close to a minute in some situations. We will now shut down port 0/11 on SW2 and see the effect on the Etherchannel.
SW2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
SW2(config)#int fast 0/11
SW2(config-if)#shutdown
3w0d: %LINK-5-CHANGED: Interface FastEthernet0/11, changed
state to administratively down
SW2#show spanning vlan 10
VLAN0010
Spanning tree enabled protocol ieee
Interface Role Sts Cost Prio.Nbr Type
Po1 Desg FWD 19 128.65 P2p
SW2#show interface trunk
Port Mode Encapsulation Status Native vlan
Po1 desirable 802.1q trunking 1
The Etherchannel did not go down! STP sees the Etherchannel as a single link; therefore, as far as STP is concerned, nothing happened.
Building an Etherchannel and knowing how it can benefit your network is an essential skill for CCNA and CCNP success, and it comes in very handy on the job as well. Make sure you are comfortable with building one before taking Cisco's exams!
Cisco CCNA / CCNP Certification: OSPF E2 vs. E1 Routes
OSPF is a major topic on both the CCNA and CCNP exams, and it's also the topic that requires the most attention to detail. Where dynamic routing protocols such as RIP and IGRP have only one route type, a look at a Cisco routing table shows several different OSPF route types.
R1#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
In this tutorial, we'll take a look at the difference between two of these route types, E1 and E2.
Route redistribution is the process of taking routes learned via one routing protocol and injecting those routes into another routing domain. (Static and connected routes can also be redistributed.) When a router running OSPF takes routes learned by another routing protocol and makes them available to the other OSPF-enabled routers it's communicating with, that router becomes an Autonomous System Border Router (ASBR).
Let's work with an example where R1 is running both OSPF and RIP. R4 is in the same OSPF domain as R1, and we want R4 to learn the routes that R1 is learning via RIP. This means we have to perform route redistribution on the ASBR. The routes that are being redistributed from RIP into OSPF will appear as E2 routes on R4:
R4#show ip route ospf
O E2 5.1.1.1 [110/20] via 172.34.34.3, 00:33:21, Ethernet0
6.0.0.0/32 is subnetted, 1 subnets
O E2 6.1.1.1 [110/20] via 172.34.34.3, 00:33:21, Ethernet0
172.12.0.0/16 is variably subnetted, 2 subnets, 2 masks
O E2 172.12.21.0/30 [110/20] via 172.34.34.3, 00:33:32,
Ethernet0
O E2 7.1.1.1 [110/20] via 172.34.34.3, 00:33:21, Ethernet0
15.0.0.0/24 is subnetted, 1 subnets
O E2 15.1.1.0 [110/20] via 172.34.34.3, 00:33:32, Ethernet0
E2 is the default route type for routes learned via redistribution. The key with E2 routes is that the cost of these routes reflects only the cost of the path from the ASBR to the final destination; the cost of the path from R4 to R1 is not reflected in this cost. (Remember that OSPF's metric for a path is referred to as "cost".)
In this example, we want the cost of the routes to reflect the entire path, not just the path between the ASBR and the destination network. To do so, the routes must be redistributed into OSPF as E1 routes on the ASBR, as shown here.
R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)#router ospf 1
R1(config-router)#redistribute rip subnets metric-type 1
Now on R4, the routes appear as E1 routes and have a larger metric, since the entire path cost is now reflected in the routing table.
O E1 5.1.1.1 [110/94] via 172.34.34.3, 00:33:21, Ethernet0
6.0.0.0/32 is subnetted, 1 subnets
O E1 6.1.1.1 [110/100] via 172.34.34.3, 00:33:21, Ethernet0
172.12.0.0/16 is variably subnetted, 2 subnets, 2 masks
O E1 172.12.21.0/30 [110/94] via 172.34.34.3, 00:33:32, Ethernet0
O E1 7.1.1.1 [110/94] via 172.34.34.3, 00:33:21, Ethernet0
15.0.0.0/24 is subnetted, 1 subnets
O E1 15.1.1.0 [110/94] via 172.34.34.3, 00:33:32, Ethernet0
Knowing the difference between E1 and E2 routes is vital for CCNP exam success, as well as fully understanding a production router's routing table. Good luck in your studies!
Category 6 Cable: A Category above the Rest!
Today's bandwidth expectations mean that Category 5 is strategically dead. The Category 5 Enhanced (5e) standards, which should have been ratified in August and may be finalized at November's committee meeting, specify new measurements that provide more margins for 100BaseTX and ATM-155 traffic. Critically, Category 5e standards make reliable Gigabit Ethernet connections possible. But many structured cabling suppliers argue that Category 5e is only an interim solution on the road to Category 6, which will support at least 200 MHz; in the interests of sufficient operating margin, the IEEE is requesting a 250-MHz Category 6 specification. Despite the fact that the Category 6 standards are only at draft stage, manufacturers are offering a host of products and claiming that these products comply with the draft proposals.
What is a category 6 cable? Out of the three cable categories (Cat-5, Cat-5e & Cat-6), Category 6 is the most advanced and provides the best performance. Just like Cat 5 and Cat 5e, Category 6 cable is typically made up of four twisted pairs of copper wire, but its capabilities far exceed those of other cable types because of one particular structural difference: a longitudinal separator. This separator isolates each of the four pairs of twisted wire from the others, which reduces crosstalk, allows for faster data transfer, and gives Category 6 cable more than twice the bandwidth of Cat 5! Cat 6 cable can support 10 Gigabit Ethernet over shorter runs and is able to operate at up to 250 MHz. Since technology and standards are constantly evolving, Cat 6 is the wisest choice of cable when taking any possible future updates to your network into consideration. Not only is Category 6 cable future-safe, it is also backward-compatible with any previously-existing Cat 5 and Cat 5e cabling found in older installations.
Category 6 (ANSI/TIA/EIA-568-B.2-1) is a cable standard for Gigabit Ethernet and other network protocols that is backward compatible with the Category 5, Category 5e and Category 3 cable standards. Cat-6 features more stringent specifications for crosstalk and system noise. The cable standard is suitable for 10BASE-T / 100BASE-TX and 1000BASE-T (Gigabit Ethernet) and is expected to suit the 10GBASE-T (10 Gigabit Ethernet) standard. It provides performance of up to 250 MHz.
The cable contains four twisted copper wire pairs, just like earlier copper cable standards. Although Cat-6 is sometimes made with 23 gauge wire, this is not a requirement; the ANSI/TIA-568-B.2-1 specification states the cable may be made with 22 to 24 AWG wire, so long as the cable meets the specified testing standards. When used as a patch cable, Cat-6 is normally terminated in 8P8C connectors, often incorrectly referred to as "RJ-45" connectors. Some Cat-6 cables are too large and may be difficult to attach to 8P8C connectors without a special modular piece, and are technically not standard compliant. If components of the various cable standards are intermixed, the performance of the signal path will be limited to that of the lowest category. As with all cables defined by TIA/EIA-568-B, the maximum allowed length of a Cat-6 horizontal cable is 90 meters (295 feet). A complete channel (horizontal cable plus cords on either end) is allowed to be up to 100 meters in length, depending upon the ratio of cord length to horizontal cable length.
The cable is terminated in either the T568A scheme or the T568B scheme. It doesn't make any difference which is used, as they are both straight through (pin 1 to 1, pin 2 to 2, etc). Mixed cable types should not be connected in series, as the impedance per pair differs and would cause signal degradation. To connect two Ethernet units of the same type (PC to PC, or hub to hub, for example) a crossover cable should be used, though some modern hardware can use either type of cable automatically.
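For reference, here are the two pinouts side by side; the only difference is that the orange and green pairs swap positions, and a standard crossover cable is simply T568A on one end and T568B on the other:
Pin  T568A          T568B
1    white/green    white/orange
2    green          orange
3    white/orange   white/green
4    blue           blue
5    white/blue     white/blue
6    orange         green
7    white/brown    white/brown
8    brown          brown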
Return loss measures the ratio of reflected-to-transmitted signal strength and is the single most difficult test to repeat with consistent results; at Category 6 levels, the difference between a pass and a fail can be the amount of bend in a test cord. Return loss is also causing headaches for connector manufacturers, because the RJ-45 system isn't up to the job. The final stumbling block with Category 5e ratification concerns the RJ-45 hardware; Category 6 is committed to RJ-45 for backward compatibility, but the ISO's proposed Category 7 system will have a new and as-yet-unspecified connector to accompany its revised cabling. Today, the return loss problem explains why manufacturers of Category 6 hardware, which is supposed to be interoperable, claim Category 6 performance only if you use the manufacturers' matched parts throughout a channel link.
The Telecommunications Industry Association (TIA) is working to complete a new specification that will define enhanced performance standards for unshielded twisted pair cable systems. Draft specification ANSI/TIA/EIA-568-B.2-10 specifies cable systems, called "Augmented Category 6" or, more frequently, "Category 6a", that operate at frequencies up to 500 MHz and will provide up to 10 Gbit/s of bandwidth. The new specification also sets limits on alien crosstalk in cabling systems.
Augmented Category 6 specifies cable operating at a minimum frequency of 500 MHz, for both shielded and unshielded versions. It can support future 10 Gbit/s applications up to the maximum distance of 100 meters on a 4-connector channel.
Cabling your home for computer network - still a requirement?
With the proliferation of wireless networking and communication equipment, it is oh-so-tempting to cut the cord and save a significant sum of money in the process. But can everything that a regular computer networking user needs be done using just a wireless network? Let's take a look at some pros and cons:
1. One important advantage of having a cabled network is the available bandwidth, or simply speed. At the present point in time the speed of a connection via a simple and inexpensive CAT5E cable can be 1000 Mbit/sec, whereas the best that IEEE 802.11g (one of the many flavors of Wi-Fi) can offer is only 54 Mbit/sec. It may not seem so significant if you think you are only browsing the Internet and the DSL speed available to you is 1.5 Mbit/sec. However, if you need to print via your network connection on a remote printer, you should realize that print jobs, depending on the amount of graphic data in them, can easily reach dozens and even hundreds of megabytes. Since 1 byte = 8 bits, one 100 MByte print job is 800 Mbit of data, which will take about 15 seconds (and in reality this time can be much longer) to transmit via a 54 Mbit/sec Wi-Fi connection, while the same job shrinks to a mere second or less on a wired 1000 Mbit/s Ethernet connection. The same principle applies to transferring files, backing up files on other computers in the network, and so on.
2. It is not possible today, and in all probability will not be possible in the future, to transmit the power needed for your networking device via the wireless link - unless, of course, you would be willing to be subjected to very high levels of microwave radiation. Thus a device that was marketed to you as "un-tethered" will in fact be very much tethered via the power cord, or will have to be re-charged every so often. The power requirements are increasingly important for devices that are expected to be always online, such as phone sets. Therefore it is best to have such a device connected via a cable that can deliver both power and the communication signal at the same time.
3. Wireless communications are very much proprietary and require a whole gamut of conversion equipment to transmit multimedia signals. The same CAT5E cable can, without any modification, support phone, computer network, balanced line-level audio and baseband video signals, as well as a host of other, more specialized control applications' signals. With inexpensive adapters called "baluns", the same cable can carry a significant number of channels of broadband television, or carry baseband video, such as a security camera's output, over great distances. All of those applications, except the computer network of course, would require specialized and expensive conversion equipment if they needed to be transmitted via a Wi-Fi link.
4. The cost benefit of not running wires around the house is not as simple an issue as it seems. Having installed a wireless network at home, you have only eliminated the need to wire for a single application - the computer network. A modern home, however, requires all kinds of wiring even without regard to computers. Power and phones are obvious examples, as are thermostats and security systems. Pre-wired speakers are common, and most homes today have intercom systems as a desirable option; those also require extensive wiring. It is very likely that the same contractor running the intercom or security cables is qualified to run computer cables - CAT5E or better. If you are building a home, you should definitely check whether a computer cabling option is available in your new home, and our advice is to go ahead and purchase it before the walls close. It is going to be a pretty involved and expensive procedure to install the cables later. As an added cost benefit of a wired computer network, you will find that all modern computers ship with a wired Ethernet network interface card included, and the latest models ship with 1000 Mbit/sec cards that are essentially free for the computer's owner.
There are multiple sources of information available on the proper planning and design of residential cabling for voice, data, audio, video and other applications. One of the best sources is the TIA/EIA-570B standard, the most recent release of which was published in 2004. The standard outlines recommended types of cables, principles of cable distribution in single- and multi-dwelling units, as well as the recommended number of cables to be installed based on the size of the house.
In conclusion, cutting the wire seems like a step forward, some sort of liberation of the computer from the bonds of the infrastructure. I would caution the reader, however, to take a more balanced and informed approach before joining the wireless revolution. There are still (and will remain, for the foreseeable future) sound reasons to include a properly designed cabling system in the list of your dream home options.
An Overview of Mobile Wireless Computing
Being able to work while traveling is essential for every professional these days. That's why laptops have become an essential item. Using the internet on a laptop is also important if one needs to keep in touch with work. Getting wireless internet for your laptop is therefore essential if you are a traveling professional. If you do add wireless internet capability to your laptop, it is important to get the best possible deal out there and to maximize the potential of your computer. Due to the mobility of the laptop, computer users need not be limited by wires when travelling, so wireless internet is very handy. Wireless internet these days is becoming better and faster and almost a necessity for every laptop.
Having wireless internet on your laptop allows the user to conduct their business in a timely and easy manner. Having a WiFi internet connection on the laptop, for example, makes conducting business much easier, and it is becoming increasingly popular as well. Being able to connect to the internet while traveling allows the user to check emails constantly and therefore keep up contacts, to have access to information such as checking figures, and to enhance their productivity. To be able to connect to the internet using a laptop, the computer must have the wireless capability to connect to a router; it is imperative that the laptop have the right network card with a WiFi connector. There are many public spaces that allow for WiFi connections in every western city. The quality of the connection differs from place to place, since it depends on a variety of factors. In general, the quality of the connection depends on the quality of the wireless signal that your laptop receives. Being closer to or further from the source of the signal will in the end determine the strength, and therefore the quality and possibly the speed, of your connection. A WiFi network allows for constant connectivity at all hours of the day and every day of the week.
Currently, computer and laptop manufacturers are investing time and money in enhancing their products' networking capabilities and speeds. In the past few years, wireless connectivity has come a long way in terms of quality and strength. When portable computers were introduced a few years back, the notion of the mobile network already existed, but it was far from perfected and had many flaws. Over the years, wireless capabilities have expanded and improved, allowing for greater connectivity, stronger networks, and higher productivity. As a result of these constant technological advancements, laptop users today can buy a computer and never have to worry about finding a modem, a router, or those inconvenient cables; just turn on the computer and start surfing the web. Due to the increasing popularity of wireless internet, public spaces have been accommodating the new trend: public libraries, airports, and even individual businesses provide wireless internet for their customers. In many city centres in North America there are so many wireless networks operating in the same place that it is virtually impossible not to find an internet connection to log on to. The only downside to wireless computing is that it may carry health risks we are not yet aware of. Overall, however, wireless computing is the way to go for the business professional, the student, or any other avid computer user.
Labels: mobile computing, wifi, wireless, wireless laptop, wireless networks
5 Steps to Securing Your Windows XP Home Computer
Most people are aware that there are continuous security issues with Microsoft's Windows operating system and other programs. What most people do not realize, however, is how easy it is to significantly improve your computer's security and reduce the likelihood of falling victim to the increasingly sophisticated threats that lurk on the internet. These steps should take less than a couple of hours to complete and should not clean out your wallet.
1) Windows Update – the first crucial step is to make sure that all your Microsoft applications have the latest product updates installed. These updates, or "patches", address security vulnerabilities and other issues, and Microsoft usually issues them on a monthly cycle. Visit the Microsoft website or switch on Automatic Updates from the Windows Control Panel. Even if your "new" computer is second hand, this is still a critical first step. If you buy a used computer with Windows XP, make sure Service Pack 2 (SP2) is installed; the short script sketch after this list shows one way to check which Service Pack the machine reports.
2) Strong Passwords – people often overlook this, but well-thought-out passwords are an important element of your computer security. A strong password should be at least 8 characters long with a mixture of letters, symbols and numbers. As a minimum, make sure the services most at risk have a strong log-in password: your bank, credit card and other financial services such as PayPal, your email account, and any other service, such as eBay, that hackers could use to generate a profit. The script sketch after this list also includes a simple check against these password rules.
3) Anti Virus Protection – while it is fair to say the threat from computer viruses has receded over the last couple of years, they can still inflict serious damage on your computer. Part of the reason the threat has reduced is that PC manufacturers now more frequently bundle anti virus packages with their new computers; last year, for example, my new Dell shipped with a 90-day trial of McAfee's Internet Security Suite. The best bet here is to purchase a security package that includes firewall and anti virus software as a minimum. Top brands include McAfee and Symantec's Norton products, and Microsoft has recently entered the market with its "OneCare" offering, which is very aggressively priced.
4) Firewall – if you are using a broadband connection, then a firewall is a definite requirement to manage the traffic flowing between your computer and the internet. A firewall monitors the inbound internet traffic passing through your computer's ports; better products also monitor outbound traffic from your computer to the internet. As above, the best bet is to buy a firewall application as part of a security package, which most vendors offer as standard. If a hardware firewall is included as part of your router, you do not need anything else. A company called Zone Labs offers a great free firewall product called ZoneAlarm, which should be used as a minimum. Windows XP does now ship with a free firewall, but it does not monitor outbound communication and therefore, in my view, does not offer adequate protection.
5) Anti Spyware Tool – this software is the last piece in your basic internet security set-up and helps combat spyware and adware. There is a good mixture of free and paid versions on offer. Good free options include Microsoft's Windows Defender, Spybot S&D and Ewido Anti-Malware; Ewido is frequently recommended in computer help forums. Be careful if you decide to purchase a solution, as there are a number of rogue vendors out there aggressively pushing products that offer little value. Stick to trusted names like Webroot's Spy Sweeper or PC Tools' Spyware Doctor, which consistently come out well in independent tests.
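As a quick illustration of steps 1 and 2, the short Python sketch below reports which Service Pack the machine claims to be running and tests a few candidate passwords against the basic rules above. It is only a sketch, not part of the original article: platform.win32_ver() returns meaningful values only on Windows, and a simple length-and-character check is no substitute for a proper password policy or manager.

import platform
import string

def service_pack() -> str:
    """Return the Service Pack string Windows reports (e.g. 'SP2').

    On non-Windows systems platform.win32_ver() returns empty strings."""
    release, version, csd, ptype = platform.win32_ver()
    return csd or "unknown"

def is_strong(password: str) -> bool:
    """Rough check of the rules in step 2: at least 8 characters with a
    mix of letters, numbers and symbols. A sketch only."""
    if len(password) < 8:
        return False
    has_letter = any(c.isalpha() for c in password)
    has_number = any(c.isdigit() for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return has_letter and has_number and has_symbol

if __name__ == "__main__":
    print("Reported Service Pack:", service_pack())
    for candidate in ("sunshine", "Sunsh1ne", "Sunsh1ne!"):
        verdict = "strong enough" if is_strong(candidate) else "too weak"
        print(candidate, "->", verdict)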
Labels: Anti Virus, computer, Firewall, Security, Spyware