Hcip Transmission v20 Training Material Compress

The document provides an overview of Huawei's NG WDM products, detailing the capabilities and applications of the OptiX OSN 1800, 8800, and 9800 series in optical transmission networks. It highlights the architecture, features, and technologies such as ROADM and ASON that enhance network management and service flexibility. Additionally, it includes links to Huawei's learning resources and certification programs for further training and information.

Recommendations

 Huawei Learning Website
 http://learning.huawei.com/en

 Huawei e-Learning
 https://ilearningx.huawei.com/portal/#/portal/ebg/51

 Huawei Certification
 http://support.huawei.com/learning/NavigationAction!createNavi?navId=_31&lang=en

 Find Training
 http://support.huawei.com/learning/NavigationAction!createNavi?navId=_trainingsearch&lang=en

More Information
 Huawei learning APP

Copyright © 2018 Huawei Technologies Co., Ltd.


 Huawei's NG WDM products provide full coverage from CWDM/PON at the access layer to DWDM at the large-capacity backbone layer. The NG WDM adopts a unified control plane design that supports hybrid networking and board sharing, which helps reduce OPEX and CAPEX.

 OptiX OSN 1800/8800/9800 are called next-generation wavelength division multiplexing (NG WDM) equipment.

 OptiX OSN 1800 series include the OSN 1800 I/II/OADM/V chassis. Multiple chassis can be stacked to expand service access capacity. OptiX OSN 1800 series are located at the edge layer of the metropolitan area network (MAN) and support almost all services from 1.5 Mbit/s to 100 Gbit/s.

 OptiX OSN 8800 series include the OSN 8800 T16/T32/T64 subrack, OSN 8800 platform subrack, and OSN 8800 UPS. The equipment integrates WDM large-capacity transmission (10G/40G/100G per wavelength), multi-plane cross-connect capability (ODUk, VC, and PKT), flexible electrical-layer grooming, PID, ASON, rich OAM management, and optical/electrical-layer protection.

 OptiX OSN 9800 series include OSN 9800 U64/U32/U16, OSN 9800 P18, and
OSN 9800 UPS. OSN 9800 U64/U32 subracks are used at the electrical layer.
OSN 9800 U64/U32 subracks are used together with OptiX OSN 9800
UPS/OSN 8800 platform subrack/OSN 8800 UPS/OSN 8800 T16 to
implement WDM/OTN system applications.
 Based on the service access capacity and service grooming granularity, the optical transmission network is classified into the following layers: access layer, aggregation layer, and backbone layer. Based on the features of NG WDM equipment and its cross-connect grooming capability, different equipment can be applied to different network layers.

 OptiX OSN 1800 is mainly applied to the metropolitan aggregation layer, metropolitan access layer, short long-haul backbone network, regional backbone network, and local network. It supports 40/80-wavelength DWDM and 8-wavelength CWDM system specifications, as well as hybrid transmission.

 OptiX OSN 8800 is mainly used in national backbone networks, regional/provincial backbone networks, and some metro core sites.

 OptiX OSN 9800 is mainly used at the backbone and metro core layers.

 OptiX OSN 9800 and OSN 1800/8800 can form a complete OTN E2E network for unified management.
 NG WDM uses the L0+L1+L2 three-layer architecture. The L0 optical layer
supports wavelength multiplexing/demultiplexing and DWDM optical signal
adding/dropping. The L1 electrical layer supports cross-connection of ODUk/VC
services. The L2 layer implements Ethernet/MPLS-TP switching.
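The ODUk cross-connection mentioned for the L1 layer operates on a containment hierarchy. The sketch below uses the standard 1.25G tributary-slot counts from ITU-T G.709; the rates are approximate nominal values and the helper function is illustrative, not a product API.

```python
# ODUk containment for electrical-layer (L1) grooming. Tributary-slot counts
# follow ITU-T G.709; rates are approximate nominal values in Gbit/s.
ODUK = {
    # name: (approx. rate in Gbit/s, number of 1.25G tributary slots)
    "ODU0": (1.25, 1),
    "ODU1": (2.50, 2),
    "ODU2": (10.04, 8),
    "ODU3": (40.32, 32),
    "ODU4": (104.79, 80),
}

def fits(client, line):
    """How many client ODUk containers a line ODUk can carry (illustrative)."""
    return ODUK[line][1] // ODUK[client][1]

print(fits("ODU0", "ODU4"))  # 80 ODU0 containers per 100G line
print(fits("ODU2", "ODU4"))  # 10 ODU2 containers per 100G line
```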

 Through the backplane bus, the system control board controls the other boards. The backplane provides functions such as inter-board communication, service grooming between boards, and power supply. The backplane bus includes the control and communication bus, electrical cross-connect bus, clock bus, etc.

 The functions of the modules in the figure are as follows:

 Optical-layer boards are used to process optical-layer services and implement optical-layer grooming at the λ level.

 Electrical-layer boards are used to process electrical-layer signals and perform optical-to-electrical conversion. Through the centralized cross-connect unit, electrical-layer signals can be flexibly scheduled at grooming granularities of different levels.

 The system control and communication board is the control center of the equipment. It works with the network management system to manage the boards of the equipment and implements communication between equipment.

 The auxiliary interface unit provides input and output ports for clock/time
signals, alarm output and cascading ports, and alarm input/output ports.
 An OTU (Optical Transponder Unit) board converts client-side services into
standard optical signals after performing mapping, convergence, and other
procedures. The board also performs the reverse process.

 Huawei OTN product series support the use of separate tributary and line boards. Tributary and line boards work with cross-connect boards; a tributary board plus a line board together perform the functions of an OTU board. Different from an OTU board, tributary and line boards working with a cross-connect board achieve more flexible and fine-grained grooming of electrical services and offer higher bandwidth utilization.

 Optical multiplexer boards multiplex multiple optical signals into one ITU-T G.694-
compliant optical signal. Optical demultiplexer boards demultiplex one multiplexed
optical signal into individual ITU-T G.694-compliant optical signals.
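The G.694-compliant channel plan can be derived arithmetically. A minimal sketch, assuming the ITU-T G.694.1 fixed 50 GHz grid and λ(nm) = 299792.458 / f(THz); the band edges used below (192.10 to 196.05 THz) are an assumption corresponding to the 1529.16 nm to 1560.61 nm C-band range quoted elsewhere in this material.

```python
# Enumerate a fixed-grid DWDM channel plan (ITU-T G.694.1, 50 GHz spacing).
# Band edges are assumptions matching the C-band range quoted in this material.
C_NM_THZ = 299_792.458  # speed of light expressed in nm * THz

def grid_channels(f_start_thz, f_end_thz, spacing_ghz):
    """Return (frequency in THz, wavelength in nm) for each grid channel."""
    spacing_thz = spacing_ghz / 1000.0
    n = int(round((f_end_thz - f_start_thz) / spacing_thz)) + 1
    return [
        (round(f_start_thz + i * spacing_thz, 3),
         round(C_NM_THZ / (f_start_thz + i * spacing_thz), 2))
        for i in range(n)
    ]

channels = grid_channels(192.10, 196.05, 50)
print(len(channels))   # 80 channels
print(channels[0])     # (192.1, 1560.61)
print(channels[-1])    # (196.05, 1529.16)
```

The same function with `spacing_ghz=100` yields the 40-channel plan mentioned for 40-wavelength systems.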

 Optical amplifier boards are used to compensate for power loss caused by long
haul transmission in fiber communication systems. They are classified into erbium-
doped fiber amplifier (EDFA) boards and Raman boards.

 OSC boards transmit optical supervisory information between two NEs. OSC boards provide highly reliable network monitoring because they transmit the OSC signal on a wavelength different from the service wavelengths.
 ASON is a new generation of optical transmission network. The ASON software provided by Huawei can be applied to NG WDM equipment to support the evolution from a traditional network to an ASON network. Such evolution complies with the ITU and IETF ASON/GMPLS-related standards. ASON technology introduces signaling and a control plane to enhance network connection management and recovery capability. It provides wavelength-level ASON services at the optical layer and ODUk-level ASON services at the electrical layer, and supports end-to-end service configuration and SLA-based services.

 With ROADM technology, the NG WDM supports flexible optical-layer grooming in one to nine degrees. The ROADM solution reconfigures wavelengths by blocking or cross-connecting them, turning the static distribution of wavelength resources into a flexible, dynamic one. With the U2000, ROADM can remotely and dynamically adjust the add/drop and pass-through status of a maximum of 80 wavelengths, and supports 1-degree to 9-degree optical-layer grooming.

 The LPT function applies to scenarios in which WDM equipment receives and transmits Ethernet services. With LPT, the WDM equipment turns its lasers on or off to reflect the link status, so that the client equipment can learn the state of the link. On detecting an abnormal link, the client equipment switches to the working or protection link to protect the services transmitted between it and the WDM equipment.
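The LPT behavior above can be sketched as a tiny state machine. This is an illustrative model only; the class and method names are invented for the sketch and are not a Huawei API.

```python
# Minimal sketch of link-state pass through (LPT), under a simplified model:
# when the WDM line side fails, the client-side laser is turned off so the
# attached client equipment sees loss of signal and can switch to its
# protection link. Names are illustrative, not a Huawei API.
class LptPort:
    def __init__(self):
        self.client_laser_on = True

    def on_line_state_change(self, line_ok):
        # Mirror the WDM line status onto the client-side laser.
        self.client_laser_on = line_ok
        return "laser on" if line_ok else "laser off (client will switch links)"

port = LptPort()
print(port.on_line_state_change(False))  # laser off (client will switch links)
print(port.on_line_state_change(True))   # laser on
```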
 OSN 9800 U64 supports grooming of ODUk (k=0, 1, 2, 2e, 3, 4, flex) services. The
IU1~IU64 slots have the same cross-connect capacity. The maximum cross-
connect capacity of a single slot is 400Gbit/s.

 OSN 9800 U32 supports grooming of ODUk (k=0, 1, 2, 2e, 3, 4, flex) services and packet services. The IU1~IU32 slots have the same cross-connect capacity. The maximum cross-connect capacity of a single slot is 400Gbit/s. In the current version, the maximum packet service grooming capacity of a subrack is 3.2Tbit/s.

 OSN 9800 U16 supports grooming of ODUk (k=0, 1, 2, 2e, 3, 4, flex) services. The
IU1~IU14 slots have the same cross-connect capacity. The maximum cross-
connect capacity of a single slot is 400Gbit/s, and the maximum cross-connect
capacity of a single subrack is 5.6Tbit/s.
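The per-subrack figures above are simple multiples of the per-slot capacity; a quick sanity check, assuming every listed service slot delivers the full 400 Gbit/s as the text states:

```python
# Sanity check on quoted subrack capacities: slots * per-slot capacity.
def subrack_capacity_gbit(service_slots, per_slot_gbit):
    return service_slots * per_slot_gbit

# OSN 9800 U16: slots IU1-IU14 at 400 Gbit/s each
print(subrack_capacity_gbit(14, 400))  # 5600 Gbit/s = 5.6 Tbit/s
```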

 OSN 9800 P18 and OSN 9800 platform subracks are optical subracks. The main
differences between them are as follows:

 The number of slots is different: OSN 9800 P18 subrack has 23 slots. OptiX
OSN 9800 platform subrack has 21 slots.

 The slot distribution is different. The SCC boards in the OSN 9800 P18
subrack are installed in IU17 and IU18. The SCC board of the OSN 9800
platform subrack is inserted in IU1 and IU2.

 Mechanical specifications: The dimensions of the OSN 9800 P18 subrack are 497 mm (W) × 295 mm (D) × 400 mm (H). The dimensions of an OptiX OSN 9800 platform subrack are 442 mm (W) × 291 mm (D) × 397 mm (H).
 ROADM technology works with the U2000 to dynamically adjust the wavelength add/drop and pass-through status. It supports a maximum of 80 wavelengths and flexible optical-layer grooming from 1 degree to 20 degrees.

 OptiX OSN 9800 supports the 40/80 x 100Gbit/s transmission solution. Advanced modulation schemes and coherent detection technology are used to overcome the physical-layer transmission challenges of the 100Gbit/s system, such as OSNR, CD tolerance, PMD tolerance, and nonlinear effects, and achieve long-distance transmission.

 Specific coding enables 80 x 40G transmission over 1500 km without electrical regeneration (eDQPSK coding) as well as the 40/80 x 100Gbit/s (ePDM-QPSK) transmission solution. Super WDM coding schemes such as NRZ, DRZ, ODB, and eDQPSK can reduce the OSNR requirement of the system.

 The client side supports three types of pluggable optical modules: Enhanced Small Form-Factor Pluggable (eSFP), 10 Gbit/s Small Form-Factor Pluggable (XFP), and 100 Gbit/s C Form-Factor Pluggable (CFP).
 OSN 8800 T32 and OSN 8800 T64 each have two types of subracks: enhanced subrack and general subrack. Except for the cross-connect capacity, the enhanced subrack and the general subrack have the same appearance and technical specifications.

 OSN 8800 T64 subrack:

 Enhanced OSN 8800 T64 subrack, supporting 6.4T ODUk (k=0, 1, 2, 2e, 3, 4,
flex)

 General OSN 8800 T64 subrack, supporting 2.56T ODUk (k=0, 1, 2, 2e, 3, flex)

 OSN 8800 T32 subrack:

 Enhanced OSN 8800 T32 subrack, supporting 3.2T ODUk (k=0, 1, 2, 2e, 3, 4,
flex)

 General OSN 8800 T32 subrack, supporting 2.56T ODUk (k=0, 1, 2, 2e, 3, flex)

 OSN 8800 T16 subrack:

 Supports 640G ODUk (k=0, 1, 2, 2e, 3, and flex).


 ROADM technology works with the U2000 to dynamically adjust the wavelength add/drop and pass-through status. It supports a maximum of 80 wavelengths and flexible optical-layer grooming from 1 degree to 20 degrees.

 OptiX OSN 8800 supports the 40/80 x 100Gbit/s transmission solution. Advanced modulation schemes and coherent detection technology are used to overcome the physical-layer transmission challenges of the 100Gbit/s system, such as OSNR, CD tolerance, PMD tolerance, and nonlinear effects, and achieve long-distance transmission.

 Specific coding enables 80 x 40G transmission over 1500 km without electrical regeneration (eDQPSK coding) as well as the 40/80 x 100Gbit/s (ePDM-QPSK) transmission solution. Super WDM coding schemes such as NRZ, DRZ, ODB, and eDQPSK can reduce the OSNR requirement of the system.

 The client side supports three types of pluggable optical modules: Enhanced Small Form-Factor Pluggable (eSFP), 10 Gbit/s Small Form-Factor Pluggable (XFP), and 100 Gbit/s C Form-Factor Pluggable (CFP).

 The OptiX OSN 8800 platform subrack and the OptiX OSN 8800 UPS do not have cross-connect capability. They differ in dimensions, number of slots, weight, management ports, and number of fans.
 The OSN 1800 I chassis and OSN 1800 II compact chassis support only the OTN plane. The OSN 1800 II packet chassis and OSN 1800 V chassis support three planes (OTN, PKT, and SDH) and can be used together with existing WDM equipment to implement service expansion.

 In the OptiX OSN 1800 series, only the OSN 1800 I chassis and OSN 1800 II
compact chassis support xPON transmission.

 A DWDM system supports a maximum of 40 wavelengths.

 A CWDM system supports a maximum of eight wavelengths, and the working wavelength range is 1471nm~1611nm.
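The eight CWDM wavelengths in this range follow the standard 20 nm channel spacing (an assumption based on ITU-T G.694.2); a quick enumeration:

```python
# CWDM channel wavelengths for the quoted range, assuming the standard
# ITU-T G.694.2 20 nm channel spacing.
def cwdm_channels(start_nm=1471, end_nm=1611, spacing_nm=20):
    return list(range(start_nm, end_nm + 1, spacing_nm))

channels = cwdm_channels()
print(len(channels))  # 8
print(channels)       # [1471, 1491, 1511, 1531, 1551, 1571, 1591, 1611]
```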
 OptiX OSN 1800 V chassis uses MS-OTN unified switching. A single subrack
supports a maximum of 800G OTN capacity and 800G packet capacity.
 FE: Fast Ethernet

 GE: Gigabit Ethernet

 ESCON: Enterprise Systems Connection

 FICON: Fiber Connection

 FC: Fibre Channel

 HDTV: High-Definition TV

 DVB-ASI: Digital Video Broadcasting - Asynchronous Serial Interface

 DVB-SDI: Digital Video Broadcasting - Serial Digital Interface

 FDDI: Fiber Distributed Data Interface


 Huawei provides two types of cabinets that comply with the ETS 300-119 standard:
N66B and N63B.

 Standard operating voltage: -48 V DC/-60 V DC.

 Operating voltage range:

 -48 V DC: -40 V to -57.6 V

 -60 V DC: -48 V to -72 V
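A quick range check against the operating voltage limits above (a minimal sketch; values are negative-rail volts DC, and the dictionary labels are illustrative):

```python
# Check a measured supply voltage against the quoted operating ranges
# (negative rail, volts DC). Dictionary keys are illustrative labels.
RANGES = {"-48V": (-57.6, -40.0), "-60V": (-72.0, -48.0)}

def in_range(nominal, measured_v):
    lo, hi = RANGES[nominal]
    return lo <= measured_v <= hi

print(in_range("-48V", -53.5))  # True
print(in_range("-60V", -45.0))  # False: above the -48 V upper limit
```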

 A 400 mm high enclosure frame can be added to the top of the cabinet to increase the cabinet height to 2600 mm.
 The PIU board of the OptiX OSN 9800 U64/U32 has a magnetic circuit breaker and can be directly connected to the power distribution cabinet.

 For all OptiX OSN 1800 and OptiX OSN 8800 subracks and the OptiX OSN 9800 U16/P18/UPS subracks, the PDU distributes power from the power distribution cabinet to the PIU.
 The DC power distribution box (PDU) for the N63B/N66B cabinet comes in the TN16, TN51, PDU (DPD63-8-8), and TN11 variants. (The OSN 9800 U32/U64 subrack does not require a DC power distribution box.)

 The TN51PDU and TN16PDU have the same functions but different heights.

 When two OptiX OSN 8800 T32 subracks are installed in a cabinet, one DCM
frame can be configured when the TN16PDU is used.

 The DC power distribution box is installed in the upper part of the cabinet. The
input ports are divided into two parts: A and B. Each part has four power input
terminals and four power grounding terminals.

 The power input area of the TN16PDU is classified into the following two types according to the current provided by the power supply equipment in the equipment room:

 When eight 63 A current inputs are available, no copper fittings are required.

 When only four 125 A inputs are available, a short-circuiting copper bar is required; each 125 A input is divided into two 63 A currents.

 The PDU (DPD63-8-8) can replace the TN11PDU, TN16PDU, and TN51PDU.
 OptiX OSN 8800 T64 subrack is configured with eight PIU boards and uses the 1+1
hot backup mode to supply power to the system. OptiX OSN 8800 T32 subrack is
configured with four PIU boards, which work in 1+1 hot backup mode to supply
power to the system. The other subracks are configured with two PIU boards,
which work in 1+1 hot backup mode to supply power to the system.

 OptiX OSN 9800 U16 subrack is configured with four PIU boards. The number of
power inputs can be flexibly configured based on the load to achieve continuous
power expansion. OptiX OSN 9800 M24 subrack is configured with four PIU boards
and uses the 1+1 hot backup mode to supply power to the system.

 The functional versions of the PIU board of the OptiX OSN 9800 are as follows:
TNS1 (U16) and TNG2 (M24).

 The functional versions of the PIU board of the OptiX OSN 8800 are as follows:
TN11, TN15, TN16, TN18, and TN51. Only the TN16PIU supports the intelligent
electric meter function (detects the power consumption of the entire subrack and
reports the detected value to the main control module).

 The overcurrent protection function for the access power of each subrack is implemented through the magnetic circuit breaker of the PDU.
 The OptiX OSN 1800 I/II/V subracks support the AC power supply solution. The OSN 8800/9800 subracks do not support AC power supply, except the UPS.

 Because the APIU board occupies a large number of slots and is seldom used on the live network, this section describes only the AC power supply solution of the OSN 1800 V.
 Slot 17, Slot 18, and Slot 19: When DC power boards (PIU) are used, they are fixed in Slot 17 and Slot 18. When APIU boards are used, they are fixed in Slot 17 and Slot 19. When DC power boards (PIU) are used, Slot 19 can house an optical wavelength conversion (OTU) board, optical add/drop multiplexer (OADM) board, optical amplifier board, multiplexer/demultiplexer board, optical protection board, optical supervisory channel board, or auxiliary board.

 Slot 20 is used to house the FAN board.

 Note: Slot 19 cannot house a packet processing board, tributary board, or line board because the slot has no cross-connect service bus.

 When a board that supports electrical ports is inserted into the OptiX OSN 1800 V chassis, the electrical ports are restricted by the chassis. The maximum number of available electrical ports must meet the following requirements:

 When unshielded network cables are used, a single layer of the chassis supports a maximum of six network cables. A single layer supports a maximum of six electrical module interfaces.

 When shielded network cables are used, a single layer of the chassis supports a maximum of four network cables. A single layer supports a maximum of four electrical module interfaces.
 Wavelength range: DWDM 1529.16 nm~1560.61 nm (C Band, ITU-T G.694.1)

 Maximum rate of a single channel: 400 Gbit/s (OTUC4)

 Network topology: Point-to-point, chain, star, ring, ring with chain, tangent ring,
intersecting ring, and mesh networking

 Network-level protection: Client 1+1 protection (a), ODUk SNCP, OLP (a), intra-board 1+1 protection (a), tributary SNCP, and LPT.

 Synchronization (b):

 IEEE 1588v2

 2 Mbit/s or 2 MHz external clock access source (with the SSM function),
meeting the ITU-T G.703 standard

 External time access source: (1PPS+TOD)

 Device running environment:

 Temperature: Long-term: 5°C~40°C, short-term: -5°C~45°C

 Humidity of the subrack: Long-term: 5%~85%, short-term: 5%~90%

 Note:

 a: This feature must be used together with OSN 9800 P18/OSN 8800
platform subrack/OSN 8800 T16/OSN 8800 UPS.

 b: This feature requires the support of the OSN 8800 T16 subrack.
 OptiX OSN 9800 U64 equipment integrates the OptiX OSN 9800 U64 subrack in a cabinet and provides board slots on both the front and rear sides. Boards must be installed in the designated slots. The equipment runs on -48 V DC or -60 V DC and is divided into areas in which boards are powered by designated PIU boards in different slots.

 PIU boards are located in the power and interface area. If an area has the same
background color as a PIU board, then the PIU board powers the boards located in
this area.

 The fan tray assemblies are used to ventilate the equipment.

 Fiber patch cords connecting to boards are routed to the left or right side of the
equipment through the upper- and lower-side fiber troughs.

 Service boards need to be configured based on the service plan and all of them
are installed in the two service board areas.

 Cross-connect boards are configured in M:N backup mode to implement cross-connections for service boards on the front and rear sides.

 The system control boards are configured in 1+1 backup mode. The active system control board manages and provides a clock to all other boards in the equipment. It also implements inter-NE communication.

 When a U64 subrack is used as a pure regeneration subrack, no cross-connect board is required.
 OptiX OSN 9800 U64 provides various management interfaces.

 EFI:

 SubRACK_ID: an LED on the front panel that displays the subrack ID. By observing the displayed subrack ID, the user can determine whether the subrack is a master or slave subrack in a multi-subrack configuration. This function is reserved.

 LAMP: subrack alarm output/cascading interface.

 ALMI/ALMO1/ALMO2: housekeeping alarm input/output interfaces.

 SERIAL: management serial interface.

 CLK1-CLK2/TOD1-TOD2: clock/time signal input and output interfaces.

 CTU:

 GE1/GE2: Connects to the GE1/GE2 interface in another OptiX OSN 9800 U64/U32/U16 subrack through a network cable to implement multi-subrack communication.

 NM: Connects to the network interface on the NMS computer through a network cable, so that the NMS can manage the OptiX OSN 9800. Also connects to the NM interface on another NE through a network cable to implement inter-NE communication. This NE can be an OptiX OSN 9800 or OptiX OSN 8800.
 Wavelength range: DWDM 1529.16 nm~1560.61 nm (C Band, ITU-T G.694.1)

 Maximum rate of a single channel: 400 Gbit/s (OTUC4)

 Network topology: Point-to-point, chain, star, ring, ring with chain, tangent ring,
intersecting ring, and mesh networking

 Network-level protection: Client 1+1 protection (a), ODUk SNCP, OLP (a), intra-board 1+1 protection (a), tributary SNCP, and LPT.

 Synchronization (b):

 IEEE 1588v2

 2 Mbit/s or 2 MHz external clock access source (with the SSM function),
meeting the ITU-T G.703 standard

 External time access source: (1PPS+TOD)

 Device running environment:

 Temperature: Long-term: 5°C~40°C, short-term: -5°C~45°C

 Humidity of the subrack: Long-term: 5%~85%, short-term: 5%~90%

 Note:

 a: This feature must be used together with OSN 9800 P18/OSN 8800
platform subrack/OSN 8800 T16/OSN 8800 UPS.

 b: This feature requires the support of the OSN 8800 T16 subrack.
 Boards must be installed in the designated slots in the subrack. The subrack runs on -48 V DC or -60 V DC. 9800 U32 subracks are classified into two types: 9800 U32 Standard and 9800 U32 Enhanced.

 PIU boards are located in the power and interface area. If an area has the same
background color as a PIU board, then the PIU board powers the boards located in
this area. The PIU boards are in mutual backup. Therefore, the failure of any power
input to the equipment does not affect the normal operation of the equipment.

 The fan tray assemblies are used to ventilate the equipment.

 Fiber patch cords connecting to boards are routed to the left or right side of the
equipment through the upper- and lower-side fiber troughs.

 Service boards need to be configured based on the service plan and all of them
are installed in the two service board areas.

 Cross-connect boards are configured in M:N backup mode to implement cross-connections for service boards on the front and rear sides.

 The system control boards are configured in 1+1 backup mode. The active system control board manages and provides a clock to all other boards in the equipment. It also implements inter-NE communication.

 When a U32 subrack is used as a pure regeneration subrack, no cross-connect board is required.
 Wavelength range: DWDM 1529.16 nm~1560.61 nm (C Band, ITU-T G.694.1)

 Maximum rate of a single channel: 400 Gbit/s (OTUC4)

 Network topology: Point-to-point, chain, star, ring, ring with chain, tangent ring,
intersecting ring, and mesh networking

 Network-level protection: Client 1+1 protection (a), ODUk SNCP, OLP (a), intra-board 1+1 protection (a), tributary SNCP, and LPT.

 Synchronization (b):

 IEEE 1588v2

 2 Mbit/s or 2 MHz external clock access source (with the SSM function),
meeting the ITU-T G.703 standard

 External time access source: (1PPS+TOD)

 Device running environment:

 Temperature: Long-term: 5°C~40°C, short-term: -5°C~45°C

 Humidity of the subrack: Long-term: 5%~85%, short-term: 5%~90%

 Note:

 a: This feature must be used together with OSN 9800 P18/OSN 8800
platform subrack/OSN 8800 T16/OSN 8800 UPS.

 b: This feature requires the support of the OSN 8800 T16 subrack.
 Boards need to be installed in the designated slots in a subrack. The subrack runs
on -48 V DC or -60 V DC and is divided into multiple areas in which boards are
powered by designated PIU boards in different slots. The subrack can be installed
in an ETSI cabinet or a 19-inch cabinet.

 The PIU boards in slots IU68 and IU80, and the PIU boards in slots IU69 and IU81
are in mutual backup. Therefore, the failure of any power input to the subrack
does not affect the normal operation of the subrack.

 The EFI board provides maintenance and management interfaces.

 The CTU boards manage the subrack, provide clock signals for service boards, and implement inter-NE communication. Two CTU boards are configured for mutual backup.

 The cross-connect boards groom services between service boards. Cross-connect boards are configured in M:N backup mode. When a U16 subrack is used as a pure regeneration subrack, no cross-connect board is required.

 Fan tray assemblies are used to ventilate the equipment.

 Fiber patch cords connecting to boards are routed to the left or right side of the
equipment through the upper- and lower-side fiber troughs.

 Service boards need to be configured based on the service plan and all of them
are installed in the service board area.
 The PIU boards are in mutual backup. Therefore, the failure of any power input to
the equipment does not affect the normal operation of the equipment. The EFI
board provides maintenance and management interfaces. The EFI board is
powered by the CXP board.

 Fan area: Two fan assemblies provide ventilation and heat dissipation for the
subrack.

 Fiber trough: Two fiber routing troughs. The pigtails led out from the optical port
of the board are routed into the fiber spool on the side of the cabinet through the
fiber trough.

 Service board area: One service board area with 24 service board slots in total. Configure service boards according to the service plan; all service boards must be inserted into this area. The ejector lever is on the left. Two small slots can be combined into one large slot: a small slot is 5.5U high, and a large slot is 11U high. A maximum of 12 large slots are supported.

 Two CXP boards work in 1+1 backup mode to provide system control and
communication functions. They manage and provide clock signals for all other
boards in the subrack, implement inter-NE communication, and provide cross-
connections and service grooming between service boards.
 The specifications of the OptiX OSN 8800 universal platform subrack are the same
as those of the OptiX OSN 9800 universal platform subrack.
 When inserting a board into a subrack, comply with the slot assignment requirements. A subrack is divided into the following areas: interface area, board area, fiber trough, and fan area.

 Interface area: The EFI provides maintenance and management interfaces.

 Board area: Slots IU1 to IU16 can house service boards. When a universal platform subrack is used as the master subrack, configure one or two SCC boards.

 When two SCC boards are configured, the two SCC boards back up each
other and are inserted in slots IU1 and IU2.

 When only one SCC board is configured, it can be installed in slot IU1 or IU2.
When the SCC board is inserted in slot IU1, slot IU2 can house service boards.
When the SCC board is installed in slot IU2, the service board cannot be
inserted in slot IU1.

 When a universal platform subrack is used as a slave subrack, no SCC board is configured. In this case, slots IU1 and IU2 can house service boards.

 Fiber trough: The fiber jumper led out from the optical port on the front panel of
the board passes through the fiber routing area and fiber spool and then enters
the side panel of the cabinet.
 Structure:

 Board area: All the boards are installed in this area. 93 slots are available.

 Fiber cabling area: Fiber jumpers from the ports on the front panel of each
board are routed to the fiber cabling area before being routed on a side of
the open rack.

 Fan tray assembly: Four fan tray assemblies are available for this subrack.
Each fan tray assembly contains three fans that provide ventilation and heat
dissipation for the subrack. The front panel of the fan tray assembly has four
indicators that indicate fan status and related information.

 Air filter: It protects the subrack from dust in the air and requires periodic
cleaning.

 Fiber spool: Rotatable fiber spools are on two sides of the subrack. Extra fibers are coiled in the fiber spool on the open rack side before being routed to another subrack.

 Mounting ears: The mounting ears attach the subrack in the cabinet.

 Note: A subrack identified by "Enhanced" is an enhanced OptiX OSN 8800 T64 subrack, and one that is not is a universal OptiX OSN 8800 T64 subrack. These two types of subracks are displayed as OSN8800 T64 Enhanced and OSN8800 T64 Standard respectively on the U2000.
 IU1-IU8, IU11-IU42, and IU45-IU68 are reserved for service boards.

 IU71 is reserved for the EFI2. IU76 is reserved for the EFI1.

 IU87 is reserved for the ATE.

 IU69, IU70, IU78, IU79, IU80, IU81, IU88, and IU89 are reserved for the PIU.

 IU72, IU83, IU73 and IU84:

 OptiX OSN 8800 T64 General subrack: IU72 and IU83 are used to house AUX
boards, and IU73 and IU84 are reserved for future use.

 OptiX OSN 8800 T64 Enhanced subrack: IU72 and IU83 are used to house the
active AUX boards, IU73 and IU84 are used to house the standby AUX boards.

 IU75 and IU86 are reserved for the STG. IU82 is reserved for the STI.

 IU74 and IU85 are reserved for the SCC.

 IU9 and IU43 are reserved for the cross-connect board.

 Enhanced OptiX OSN 8800 T64 subrack: TNK2UXCT or TNK4XCT.

 General OptiX OSN 8800 T64 subrack: TNK4XCT or TNK2XCT.

 IU10 and IU44 are reserved for the cross-connect board.

 Enhanced OptiX OSN 8800 T64 subrack: TNK2USXH, TNK4SXH or TNK4SXM.

 General OptiX OSN 8800 T64 subrack: TNK4SXH, TNK2SXH, TNK4SXM, or TNK2SXM.
 Structure:

 Board area: All the boards are installed in this area. 50 slots are available.

 Fiber cabling area: Fiber jumpers from the ports on the front panel of each
board are routed to the fiber cabling area before being routed on a side of
the open rack.

 Fan tray assembly: The fan tray assembly contains three fans that provide ventilation and heat dissipation for the subrack. The front panel of the fan tray assembly has four indicators that indicate subrack status.

 Air filter: It protects the subrack from dust in the air and requires periodic
cleaning.

 Fiber spool: Rotatable fiber spools are on two sides of the subrack. Extra fibers are coiled in the fiber spool on the open rack side before being routed to another subrack.

 Mounting ears: The mounting ears attach the subrack in the cabinet.

 Note: A subrack identified by "Enhanced" is an enhanced OptiX OSN 8800 T32 subrack, and the one that is not identified by "Enhanced" is a universal OptiX OSN 8800 T32 subrack. These two types of subracks are displayed as OSN8800 T32 Enhanced and OSN8800 T32 Standard.
 IU1-IU8, IU12-IU27, and IU29-IU36 are reserved for service boards.

 IU37 is reserved for the EFI2. IU38 is reserved for the EFI1.

 IU48 is reserved for the ATE.

 IU47 is reserved for the STI.

 IU39, IU40, IU45 and IU46 are reserved for the PIU.

 IU41 and IU43:

 OptiX OSN 8800 T32 General subrack: IU41 is reserved for the AUX, and IU43 is reserved for future use.

 OptiX OSN 8800 T32 Enhanced subrack: IU41 is reserved for the active AUX, and IU43 is reserved for the standby AUX.

 IU42 and IU44 are reserved for the STG.

 IU28 is reserved for the active SCC.

 IU11 is available for the standby SCC or the other boards.

 IU9 and IU10 are reserved for the cross-connect board.

 OptiX OSN 8800 T32 General subrack: XCH or XCM.

 OptiX OSN 8800 T32 Enhanced subrack: UXCH, UXCM, XCH or XCM.

 IU50 and IU51 are reserved for the fans.
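Slot reservations like those above are easy to mishandle during planning. The following is a minimal sketch that encodes the T32 General subrack assignments listed above as a lookup table; the roles are transcribed from this section, and the helper name `slot_role` is illustrative, not part of any Huawei tool:

```python
# Sketch: reserved-slot lookup for the OptiX OSN 8800 T32 General subrack,
# with the roles transcribed from the slot assignments listed above.

T32_GENERAL_SLOTS = {
    **{f"IU{n}": "service board"
       for n in list(range(1, 9)) + list(range(12, 28)) + list(range(29, 37))},
    "IU37": "EFI2", "IU38": "EFI1",
    "IU48": "ATE", "IU47": "STI",
    **{f"IU{n}": "PIU" for n in (39, 40, 45, 46)},
    "IU41": "AUX", "IU43": "reserved for future use",
    "IU42": "STG", "IU44": "STG",
    "IU28": "active SCC", "IU11": "standby SCC or other boards",
    "IU9": "cross-connect board (XCH or XCM)",
    "IU10": "cross-connect board (XCH or XCM)",
    "IU50": "fan", "IU51": "fan",
}

def slot_role(slot: str) -> str:
    """Return the reserved role of a slot in the T32 General subrack."""
    return T32_GENERAL_SLOTS.get(slot, "unknown slot")
```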


 Structure:

 Board area: All the boards are installed in this area. 24 slots are available.

 Fiber cabling area: Fiber jumpers from the ports on the front panel of each
board are routed to the fiber cabling area before being routed on a side of
the open rack.

 Fan tray assembly: Fan tray assembly contains ten fans that provide
ventilation and heat dissipation for the subrack. The front panel of the fan
tray assembly has four indicators that indicate fan status and related
information.

 Air filter: It protects the subrack from dust in the air and requires periodic
cleaning.

 Fiber spool: Rotatable fiber spools are on two sides of the subrack. Extra fibers
are coiled in the fiber spool on the open rack side before being routed to
another subrack.

 Mounting ears: The mounting ears attach the subrack in the cabinet.
 IU1-IU8, and IU11-IU18 are reserved for service boards.

 IU19 is reserved for the EFI.

 IU20 and IU23 are reserved for the PIU.

 IU21 and IU22 are reserved for the AUX.

 IU24 is reserved for the ATE.

 IU25 is reserved for the fans.

 IU9 and IU10 are reserved for the XCH or for the other service boards.

 Note:

 Slots IU9 and IU10 can be used to house other service boards only when the
OptiX OSN 8800 T16 functions as a slave subrack.

 Each of slots IU9 and IU10 must be filled with a filler panel when they are
used to house service boards.
 The functional versions of the ATE board are as follows: TN16 and TN51.

 The number of ports on the ATE board varies according to the board version. The
specifications vary according to the board version. The TN16ATE and TN51ATE
boards cannot substitute for each other.
 The functional versions of the EFI board are as follows: TN15, TN16, and TN18.

 The number of interfaces on the EFI board varies according to the board version.
The specifications vary according to the board version. Boards of different versions
cannot substitute for each other.

 Note: The LAMP interface provides a 5 V power supply and can be used only to connect
to the alarm indicator of the cabinet. The RJ45 network cables of the NM_ETH, ETH,
ALMO, and CLK interfaces cannot be inserted into the LAMP interface. Otherwise,
the EFI board or the customer's meters and devices may be damaged.
 OptiX OSN 9800 U64/U32 (TNV3EFI), OSN 9800 U16 (TNS2EFI), and OSN 9800M24
(TNG1EFI) provide various management interfaces.

 SubRACK_ID: Displays the master/slave relationship of each subrack during subrack cascading. 0 indicates that the subrack where the board resides is the master subrack. "EE" indicates that the subrack ID cannot be read. Other values indicate that the subrack where the board resides is a slave subrack.

 LAMP: subrack alarm output/cascading interface

 ALMI/ALMO1/ALMO2: housekeeping alarm input/output interfaces

 SERIAL: management serial interface

 CLK1-CLK2: clock signal input and output interfaces

 TOD1-TOD2: time signal input and output interface

 Door Alarm: cabinet door access control alarm interface

 CE_CLK1/CE_CLK2: clock extended port of the cluster

 NM_ETH1/NM_ETH2: network management interfaces

 The EFI board has a DIP switch that controls the subrack ID. The default value is
00000.
 The master and slave subracks are connected through the ETH1/ETH2/ETH3 port
on the EFI2 board.

 There are DIP switches inside the EFI1 board. The EFI2 board is connected to the master subrack through the ETH1, ETH2, or ETH3 interface. The ID of each subrack is set by using two DIP switches on the EFI1 board. Each bit of these two DIP switches represents a binary value of 0 or 1. ID1-ID4 correspond to bits 1-4 of SW2, and ID5-ID8 correspond to bits 1-4 of SW1. Among these ID values, only ID1-ID5 are valid; ID6-ID8 are reserved. The bits from high to low are ID5-ID1, by which a maximum of 32 states can be set. The value is 00000 by default. "0" indicates the master subrack. The other values indicate slave subracks.

 When the DIP switch is ON, the value of the corresponding bit is set to 0.

 As shown in the figure, the value represented by ID5-ID1 is 00001, which is 1 in the decimal system. That is, the subrack ID is 1.
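The ID encoding above can be sketched as a small decoder. The rules assumed here are exactly those stated: ON means a bit value of 0, ID5 is the most significant bit, and only ID1-ID5 are used:

```python
# Sketch: decode the EFI1 subrack ID from the DIP switch positions described
# above. ON means the bit value is 0; ID5 is the most significant bit; only
# ID1-ID5 are valid (ID6-ID8 are reserved).

def subrack_id(on_flags_id5_to_id1):
    """on_flags_id5_to_id1: five booleans for ID5..ID1 (most significant
    first), each True if the corresponding DIP switch is ON (bit = 0)."""
    bits = "".join("0" if on else "1" for on in on_flags_id5_to_id1)
    return int(bits, 2)

# All switches ON -> 00000 -> ID 0, the master subrack.
assert subrack_id([True] * 5) == 0
# Only the ID1 switch OFF -> 00001 -> subrack ID 1, as in the figure.
assert subrack_id([True, True, True, True, False]) == 1
```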
 The OSN 1800 OADM frame cannot be used independently. It can be used only as an extended chassis of the OSN 1800 I or OSN 1800 II compact chassis to increase the number of wavelengths accessed by a single NE and achieve low-cost networking. Use a straight-through cable to connect the OCTL port on the SCC board to the OCTL port on the CTL board.

 The extended OADM frame uses the same chassis as the OSN 1800 I. It can house only the OADM, FIU, and SCS boards.

 There is no main control board, fan module, or power board. The positions of the original fan module and power board are occupied by the CTL board.
 The OSN 1800 II packet chassis does not support the OADM frame. It has a large
service access capacity and can groom the convergence nodes of the electrical
signals at the electrical layer, thus enhancing the network flexibility.

 OTN:

 Optical layer: Supports the insertion and demultiplexing of any 2/4/8


adjacent wavelengths on the OTM and OADM boards.

 Electrical-layer: Supports intra-board cross-connections of


ODU0/ODU1/ODUflex/ODU2/ODU4.

 Packet: 60Gbit/s. The following figure shows the access capacity of each slot.

 SDH: 20 Gbit/s higher order cross-connect capacity and 5 Gbit/s lower order cross-connect capacity. The following figure shows the access capacity of each slot.
 OptiX OSN1800V has a large service access capacity. It can groom the
convergence nodes of the electrical signals at the electrical layer, thus enhancing
the network flexibility.

 Slot 17, Slot 18, and Slot 19: When DC power boards (PIU) are used, they are installed in Slot 17 and Slot 18. When AC power boards (APIU) are used, they are installed in Slot 17 and Slot 19. When the DC power board (PIU) is used, Slot 19 can house the optical wavelength conversion (OTU) board, optical add/drop multiplexer (OADM) board, optical amplifier board, multiplexer/demultiplexer board, optical protection board, and auxiliary board.

 Slot 19 cannot house the packet processing board, tributary board, or line board because there is no cross-connect service bus in the slot.

 OptiX OSN 1800 V uses MS-OTN unified switching. A single subrack supports a
maximum of 800G OTN capacity, 800G packet capacity, 280G SDH higher order
capacity, and 20G SDH lower order capacity.
 The DCM frame is used to house the Dispersion Compensation Module (DCM).

 When an optical signal is transmitted over a certain distance on the line, the pulse of the optical signal is broadened due to the accumulation of positive dispersion, which seriously affects the transmission performance of the system. The DCM uses a passive compensation method: the negative dispersion of the dispersion compensation fiber cancels the positive dispersion of the transmission fiber. In this way, the signal pulse is compressed.

 According to different implementation principles, DCMs are classified into two types: DCM based on the DCF (dispersion compensation fiber) and DCM based on the FBG (fiber Bragg grating).

 The insertion loss of the DCF-based DCM increases with the compensation distance.

 The insertion loss of the FBG-based DCM is independent of the distance. The insertion loss is always 4 dB.

 The OptiX OSN 8800 provides eight types of DCM optical modules: 5 km, 10 km, 20 km, 40 km, 60 km, 80 km, 100 km, and 120 km.

 The DCM module has three important parameters: compensated fiber type, compensation distance, and attenuation.

 Each DCM frame can house a maximum of two DCM modules. The DCM frame is
fixed on the column of the cabinet through mounting ears and screws.
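As a planning illustration, the module sizes and the two-modules-per-frame limit above can drive a simple selection helper. This is a sketch only: it assumes compensation distances add linearly and ignores fiber type and residual-dispersion targets, which real dispersion designs must also consider:

```python
# Planning sketch: pick at most two DCM modules (one DCM frame) whose
# combined compensation distance covers a span. Assumes compensation
# distances add linearly; real designs also account for fiber type and
# residual-dispersion targets.
from itertools import combinations_with_replacement

DCM_MODULES_KM = (5, 10, 20, 40, 60, 80, 100, 120)

def pick_dcm(span_km):
    """Return the module combination with minimum overshoot, or None."""
    best = None
    for r in (1, 2):
        for combo in combinations_with_replacement(DCM_MODULES_KM, r):
            if sum(combo) >= span_km and (best is None or sum(combo) < sum(best)):
                best = combo
    return best
```

For example, a 70 km span is covered exactly by a 10 km plus a 60 km module in one frame, rather than a single 80 km module.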
 Reference answer:

 The subrack ID of the OSN8800 T64 or OSN8800 T32 subrack is set on the EFI1 board.
The subrack ID of the OSN8800 T16 subrack or OptiX OSN 8800 universal platform
subrack is set on the EFI board. The subrack ID of the OptiX OSN 8800 platform subrack
is set on the AUX board.
 An OTU performs O-E-O conversion between client-side signals and WDM-side standard signals.

 On the client side, there is no special requirement for the service, but the service should comply with standards such as ITU-T or IEEE. eSFP modules for rates below 2.5 Gbit/s and XFP modules for 10 Gbit/s can be configured based on customer requirements.

 On the WDM side, output signals comply with standards such as ITU-T G.694 for wavelength allocation, G.709 for OTN, and G.975 for FEC.

 As most Huawei OTUs support OTN, the GCC0/1 bytes in the overhead can be used for ESC transmission to save the cost of the OSC.

 ePDM-QPSK is short for polarization-multiplexed quadrature phase-shift keying and is developed based on the differential quadrature phase shift keying (DQPSK) technology. ePDM-QPSK is a preferred solution for 100G WDM transmission.

 ePDM-BPSK is short for polarization-multiplexed binary phase shift keying and is developed based on ePDM-QPSK. ePDM-BPSK is intended for 40G ultra long-haul transmission.

 During the intermediate processing, alarms and performance events are reported to the system. Because of the O-E-O conversion, it provides re-timing, re-shaping, and regenerating (3R) functions.
 ALS stands for the automatic laser shutdown function. WDM ALS can transparently transfer the SDH ALS indication.
 Note: "-" indicates that there is no letter; that is, some boards do not have a fourth letter.

 The OSN 9800 U32/U64/U16 does not support this type of board.


 LSX: 10Gbit/s wavelength conversion board

 LDX: 2 x 10 Gbit/s wavelength conversion unit

 LSXL: 40Gbit/s wavelength conversion board

 LSQ: 40Gbit/s wavelength conversion board

 LSC: 100Gbit/s wavelength conversion board

 LTX: 10-port 10Gbit/s service multiplexing & optical wavelength conversion board
 LDM: 2-channel multi-rate (100Mbit/s-2.5Gbit/s) wavelength conversion board

 LQM: 4-channel multi-rate (100Mbit/s-2.5Gbit/s) OTU1 wavelength conversion board

 LOM: 8-port multi-service multiplexing & optical wavelength conversion board

 LOA: 8 x Any-rate MUX OTU2 wavelength conversion board.

 LOG: 8 x Gigabit Ethernet unit

 LEM24: 22 x GE + 2 x 10GE and 2 x OTU2 Ethernet Switch board

 LEX4: 4 x 10GE and 2 x OTU2 Ethernet Switch Board


 Note: The LQM2/LQM/LDGF2/LDGF board can access GE services at optical ports
or GE services at electrical ports.

 ELOM: enhanced 8-channel any-rate service convergence wavelength conversion board (supported only by the OptiX OSN 1800)
 xPON: x Passive Optical Network

 GPON: gigabit-capable passive optical network

 Note: The RX1/TX1-RX4/TX4 ports on the LQPL/LQPU board are used to access GPON services and STM-16/OTU1 services.
 TX/RX indicates the client side and IN/OUT indicates the WDM side.
 Basic function: The LSC board converts signals as follows: 1 x 100GE/OTU4 <-> 1 x OTU4

 100GE: Ethernet service at a rate of 103.125 Gbit/s.

 OTU4: OTN service at a rate of 111.81 Gbit/s.

 OTN function: Supports the OTN frame format and overhead processing compliant
with ITU-T G.709. Maps one channel of client-side service signals into OTU4 signals.
Supports SM and PM functions for OTU4 and ODU4. Supports TCM functions and
TCM non-intrusive monitoring for ODU4.

 PRBS function: Supports the PRBS function on the WDM side.

 CFP2: Supports 100 GE pluggable optical modules on the client side.

 Loopback: Supports inloops and outloops on both the client side and the WDM side.

 The TN18LSC supports WDM-side pluggable optical modules.

 NOTE: To prevent the cabinet door from squeezing fibers, the board can only use
G.657A2 fibers.
 Client-side pluggable optical module specifications (100GE/OTU4 services)

Parameter                            Unit   Value
Optical Module Type                  -      (100G BASE-4×25G)/(OTU4-4×28G)-10 km-CFP2
Line Code Format                     -      NRZ
Optical Source Type                  -      SLM
Target Transmission Distance         -      10 km (6.2 mi.)
Total Average Launch Power (Min)     dBm    100GE: 1.7; OTU4: 3.5
Total Average Launch Power (Max)     dBm    100GE: 10.5; OTU4: 8.9
Average Launch Power per Lane (Min)  dBm    100GE: -4.3; OTU4: -2.5
Average Launch Power per Lane (Max)  dBm    100GE: 4.5; OTU4: 2.9
Transmit OMA per Lane (Min)          dBm    -1.3 (Only for 100GE)
Transmit OMA per Lane (Max)          dBm    4.5 (Only for 100GE)
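A quick way to use the table is a range check on a measured per-lane launch power. The following is a minimal sketch with the min/max values transcribed from the table above; the helper name is illustrative:

```python
# Sketch: range check on a measured per-lane launch power, using the
# (min, max) values transcribed from the table above.

LANE_LAUNCH_POWER_DBM = {
    "100GE": (-4.3, 4.5),  # average launch power per lane (min, max)
    "OTU4": (-2.5, 2.9),
}

def lane_power_ok(service, power_dbm):
    lo, hi = LANE_LAUNCH_POWER_DBM[service]
    return lo <= power_dbm <= hi
```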
 The main functions and features supported by the ELOM board are wavelength conversion,
service convergence, and ALS.
 WDM specifications: Supports ITU-T G.694.1-compliant DWDM specifications and ITU-T
G.694.2-compliant CWDM specifications.
 F2ELOM (STND) boards support the following board and port working modes:

 1*AP8 general mode, the board supports the following port working modes:

 ODU0 non-convergence mode

 ODU1 convergence mode

 ODU1 non-convergence mode

 ODUflex non-convergence mode

 1*AP1 ODU2 mode, the board supports the following port working mode: 1 x Any
(4.9 Gbit/s to 10.5 Gbit/s)/InfiniBand 5G <-> 1 x OTU2/OTU2e

 1*AP1 ODUflex mode, the board supports the following port working mode: 1 x CPRI
option6/FC800/FICON 8G/InfiniBand 5G <-> 1 x OTU2
 Basic function:

 Converts 22 channels of GE/FE services and two channels of 10GE WAN/10GE LAN services received directly on the client side into two channels of standard WDM wavelength OTU2 signals and performs the reverse process.

 Converges multiple flat-rate GE or 10GE services into one channel of 10GE service.

 QoS (Quality of Service)

 Supports committed access rate (CAR) and class of service (CoS).

 Supports IEEE802.1p.

 Supports DSCP.

 LAG (Link Aggregation Group)

 Supports the IEEE802.3ad-compliant LAG protocol running at IP and trunk ports.

 Supports manual and static LAGs.

 Supports load-sharing and non-load-sharing LAGs.

 Flow control: Supports the IEEE802.3X-compliant Ethernet flow control protocol and flow control termination.

 Maximum transmission unit (MTU): Supports frames of up to 9600 bytes.

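The convergence the LEM24 performs can be quantified with nominal rates: 22 x GE plus 2 x 10GE of client capacity share 2 x OTU2 line signals, each carrying roughly one 10GE payload. A back-of-the-envelope sketch, using nominal rates only (actual utilization depends on traffic):

```python
# Back-of-the-envelope convergence ratio for the LEM24 board: 22 x GE plus
# 2 x 10GE of client capacity share 2 x OTU2 line signals (each carrying
# roughly one 10GE payload). Nominal rates only.

client_gbit = 22 * 1 + 2 * 10   # 42 Gbit/s of client-side capacity
line_gbit = 2 * 10              # ~20 Gbit/s of line-side payload

oversubscription = client_gbit / line_gbit
assert oversubscription == 2.1
```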

 An OTN tributary board receives client-side services, performs O-E conversion, maps the services into ODUk containers, and then sends the ODUk electrical signals to the cross-connect board for centralized cross-connection.

 A line board multiplexes and maps ODUk electrical signals cross-connected from the cross-connect board and performs conversion between OTUk optical signals and standard wavelengths.
 A tributary unit has only a client side. Its port names are TX/RX.

 A line unit has only a WDM side. Its port names are IN/OUT.
 The N401 board uses coherent detection technology and is targeted for coherent
communications.

 When the N401 board works in relay mode and ESC is used for communication,
you are advised to install the N401 boards in adjacent paired slots. If they are not
installed in adjacent paired slots, you must manually establish the MAC connection
of DCN.

 OptiX OSN 9800 U64 Standard subrack: You are advised to install a pair of
N401 boards in relay mode in any of the following slot pairs: IU1/IU2,
IU3/IU4, ..., IU61/IU62, and IU63/IU64.

 OptiX OSN 9800 U32 Standard subrack: You are advised to install a pair of
N401 boards in relay mode in any of the following slot pairs: IU1/IU2,
IU3/IU4, ..., IU29/IU30, and IU31/IU32.

 OptiX OSN 9800 U16 subrack: You are advised to install a pair of N401 boards in relay mode in any of the following slot pairs: IU1/IU2, IU3/IU4, ..., IU11/IU12, and IU13/IU14.

 OptiX OSN 9800 M24 subrack: You are advised to install a pair of N401
boards in relay mode in any of the following slot pairs: (IU1, IU13)/(IU2, IU14),
(IU3, IU15)/(IU4, IU16), ..., (IU9, IU21)/(IU10, IU22), and (IU11, IU23)/(IU12,
IU24).
 NS4 converts signals as follows:

 80xODU0/80xODUflex/40xODU1/10xODU2(e)/2xODU3/1xODU4<->1xOTU4.

 Supports mixed transmission of ODU0, ODU1, ODUflex, ODU2, ODU2e, and ODU3
signals.
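The per-type maxima above imply how much of one OTU4 each client type consumes. The following is a rough planning check that treats each client as a fractional share of the line; it approximates, but does not implement, the exact ITU-T G.709 tributary-slot allocation:

```python
# Planning sketch: does a mix of ODUk clients fit into one OTU4 on the NS4
# board? Each client type is treated as a fractional share of the line,
# derived from the per-type maxima listed above. This approximates, but does
# not implement, the exact ITU-T G.709 tributary-slot allocation.

MAX_PER_OTU4 = {"ODU0": 80, "ODUflex": 80, "ODU1": 40,
                "ODU2": 10, "ODU3": 2, "ODU4": 1}

def fits_in_otu4(mix):
    """mix: dict such as {"ODU0": 10, "ODU2": 4}."""
    share = sum(count / MAX_PER_OTU4[kind] for kind, count in mix.items())
    return share <= 1.0
```

For example, 40 x ODU0 plus 5 x ODU2 exactly fills the line (0.5 + 0.5 = 1.0).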

 Functional block diagram of the board


 As a type of line board, the ND2 board converts 16 x ODU0, 8 x ODU1, 4 x
ODUflex, or 2 x ODU2 signals into 2 x ITU-T G.694.1-compliant OTU2 signals or
converts 2 x ODU2e signals into 2 x ITU-T G.694.1-compliant OTU2e signals. The
board supports hybrid transmission of ODU0, ODU1, ODU2/ODU2e, and ODUflex
signals.

 Supports ITU-T G.694.1-compliant dense wavelength division multiplexing (DWDM) specifications.

 Supports the optical tunable transponder. Equipped with this module, the board
can tune the optical signal output on the WDM side within the range of 80
wavelengths in C-band with the channel spacing of 50 GHz.

 Adopts the ITU-T G.709-compliant frame format and overhead processing.

 Provides path monitoring (PM) for ODU2, ODUflex, ODU1, and ODU0.

 Provides section monitoring (SM) for OTU2.

 Provides tandem connection monitoring (TCM) for ODU2, ODU1, and ODU0.
 As an OTN tributary board, the V3T210/V3T220/V3T230 board can support a mix
of ODU0, ODU1, ODU2(e), and ODUflex signals.

 As an OTN tributary board, the T130 board can accept a maximum of 80 Gbit/s
client services. The board supports a mix of ODU0, ODU1, and ODUflex signals.
 Basic function:

 ODU0 non-convergence mode (Any->ODU0)

 ODU1 non-convergence mode (Any->ODU1)

 ODU2 non-convergence mode (OTU2/Any->ODU2)

 ODUflex non-convergence mode (Any->ODUflex)

 ODU1_ODU0 mode (OTU1->ODU1->ODU0)

 ODU1 convergence mode (n*Any->ODU1)

 Signal Flow:

 Transmit direction: The client-side optical module receives optical signals through the RX1-RXn optical interfaces, converts them into electrical signals, and sends the electrical signals to the signal processing module. The signal processing module performs operations such as signal mapping and OTN framing, and then sends ODUk signals to the backplane for grooming.

 Receive direction: The signal processing module receives ODUk electrical signals from the cross-connect board through the backplane. The module performs operations such as ODUk framing and demapping, and then sends the signals to the client-side optical module. The client-side optical module performs E/O conversion and sends the optical signals to the client equipment through the TX1-TXn optical interfaces.
 3G-SDI: Video service at a rate of 2.97 Gbit/s

 FC400: SAN service at a rate of 4.25 Gbit/s

 Note: When receiving OTU1 services, the TX8/RX8 optical port cannot process IEEE
1588v2 clock signals.

 Functional block diagram of the board


 The TSC consists of the client-side optical module, signal processing module,
control and communication module, and power supply module.

 Transmit direction: The client-side optical module receives one channel of 100GE/OTU4 service signals through the RX interface and performs O/E conversion. The electrical signals are then sent to the signal processing module, which performs operations such as encapsulation, mapping, OTN framing, and FEC encoding. The module then sends one channel of ODU4 signals or n channels of ODUflex (n = 1-80) electrical signals to the backplane for grooming.

 Receive direction: The signal processing module receives one channel of ODU4 signals or n channels of ODUflex (n = 1-80) electrical signals from the cross-connect board through the backplane. The module performs operations such as ODU4/ODUflex framing, demapping, and decapsulation, and then sends one channel of 100GE/OTU4 signals to the client-side optical module. The client-side optical module performs E/O conversion and outputs the client-side optical signal through the TX optical interface.
 A universal line board supports hybrid transmission of OTN, SDH and packet
services. Compared to an OTN line board, a universal line board additionally
supports packet services and SDH services. In other words, a universal line board
grooms a wider range of electrical signals and offers higher bandwidth utilization.

 A universal line board receives and processes ODUk signals, VC-4 signals, or packet signals from the cross-connect board. When receiving packet signals or VC-4 signals, the board processes the services, maps them into ODUk signals, performs multiplexing and E/O conversion, and sends out an OTUk optical signal carried over an ITU-T G.694.1-compliant DWDM wavelength.

 The difference between the functions of the universal line boards lies in the service
rate, number of channels, and types and quantity of services that can be processed
on the WDM side.
 HUNQ2 application scenario:

 HUNS3 application scenario:


 Each port on a general service processing board can be used as a tributary port or
a line port. Therefore, the general service processing board provides the functions
of both OTN tributary and line boards.
 G210: 10 x 10 Gbit/s General Service Processing Board

 G220: 20 x 10 Gbit/s General Service Processing Board

 G402: 2 x 100 Gbit/s General Service Processing Board

 G404: 4 x 100 Gbit/s General Service Processing Board

 The G210/G220/G402/G404 board is a general service processing board. Each port on the board can be used as a tributary port, a line port (line mode), or a line port (relay mode).
 The OptiX OSN 8800 also supports a general service processing board, named GS4.

 The GS4 board is a general service processing board (for gray optical signals) that
converts the 80xODU0, 40xODU1, 10xODU2, 10xODU2e, 2xODU3, 1xODU4, or
80xODUflex into one OTU4 signal. The board supports hybrid transmission of the
ODU0 service, ODU1 service, ODU2/ODU2e service, ODU3 service and the ODUflex
service.
 Packet service boards are classified into two types: boards that process Ethernet
services and boards that process TDM services. The packet boards that process
Ethernet services provide a resilient LSP transport pipe to receive and transmit
data packets and perform MPLS switching for the packets. They use MPLS-TP APS
and MPLS-TP PW FPS to provide carrier-class protection for data services, offer a
variety of OAM methods such as MPLS-TP OAM and ETH-OAM, and implement
bandwidth management using QoS. The boards that process TDM services, also
called CES/CEP boards, use the PWE3 mechanism to simulate the basic behaviors
and characteristics of TDM services on PSNs, enabling the PSNs to carry TDM
services.
 As a packet service board, the E124/E208/E212/E302/E401 board receives and
transmits FE/GE/10GE/40GE/100GE services, processes packet services, and
transmits packets to the cross-connect board for centralized cross-connections.

 EC116/EC404 are applicable to the time division multiplexing (TDM) pseudo wire
emulation edge-to-edge (PWE3) scenario. In this scenario, the PWE3 mechanism is
used to simulate TDM services and connect the traditional TDM network to the
packet switched network (PSN).
 EG16: 16-port gigabit ethernet switch board

 EX2: 2 x 10GE ethernet packet switch board

 EX8: 8 x 10GE ethernet packet switch board

 PND2: 2 x 10Gbit/s packet line board


 A TDM board receives and transmits SDH signals. It converts the received optical
signals into VC signals and sends them to the cross-connect side. Then, the
universal line board converts the VC signals into OTUk optical signals that comply
with WDM system requirements for the transmission in a WDM network.
 A cross-connect board establishes physical channels for electrical signals on
service boards inside a subrack, and grooms electrical signals by collaborating with
the service boards.
 XCS: Centralized Cross-Connect Board

 UXCS: Enhanced Universal Cross Connect Board

 SXCL: 80G VC3/VC12 Low Order Universal Cross-Connect Board

 UCXCS: Universal Cluster Cross Connect Board


 OptiX OSN 1800 supports only X40/EX40/FIU/DFIU/DSFIU.
 TN11/TN12M40V01: Multiplexes 40 channels of C_EVEN signals into one main
optical signal.

 TN11/TN12M40V02: Multiplexes 40 channels of C_ODD signals into one main optical signal.

 The attenuation adjustment range of the M40V is 0 dB to 15 dB.

 The MON port is tapped from the OUT port at a 10/90 splitting ratio; that is, the MON port power is about 10 dB lower than the OUT port power.

 The insertion loss of the M40V is 8 dB.
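The 10/90 monitoring tap can be verified with the usual decibel arithmetic; a short sketch:

```python
# Decibel arithmetic for a coupler monitoring tap.
import math

def split_ratio_db(tap_fraction):
    """Power of a coupler tap relative to the total input, in dB."""
    return 10 * math.log10(tap_fraction)

# A 10/90 tap sits 10 dB below the total input power
# (and about 9.5 dB below the 90% OUT port).
assert round(split_ratio_db(0.10), 1) == -10.0
assert round(10 * math.log10(10 / 90), 1) == -9.5
```

The same arithmetic applies to any monitoring tap ratio, e.g. a 1/99 tap sits about 20 dB below the total input power.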


 Basic function

 In the two-fiber bidirectional system, multiplexes a maximum of 40 channels of ITU-T G.694.1-compliant standard-wavelength optical signals with a channel spacing of 100 GHz into one channel of multiplexed optical signals.

 In the two-fiber bidirectional system, demultiplexes one channel of multiplexed optical signals into a maximum of 40 channels of ITU-T G.694.1-compliant standard single-wavelength optical signals with a channel spacing of 100 GHz.

 In the single-fiber bidirectional system, multiplexes a maximum of 18 channels of ITU-T G.694.1-compliant standard-wavelength optical signals with a channel spacing of 100 GHz into one channel of multiplexed optical signals and, at the same time, demultiplexes one channel of multiplexed optical signals into a maximum of 18 channels of ITU-T G.694.1-compliant standard single-wavelength optical signals with a channel spacing of 100 GHz.

 Online optical performance monitoring

 Provides an online monitoring optical interface to which a spectrum analyzer can be connected to monitor the spectrum of the main optical path without interrupting services.

 Insertion loss: < 6.5 dB


 The ratio of the MON port to the OUT port: 1/99 (20 dB).

 Position of the FIU board in the WDM system (normal optical power):
 Supports multiplexing/demultiplexing between optical signals with a channel spacing of 100 GHz and optical signals with a channel spacing of 50 GHz.
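The interleaving can be illustrated numerically: two 100 GHz combs offset by 50 GHz merge into one uniform 50 GHz comb. A sketch anchored at 193.1 THz per ITU-T G.694.1 (the channel count here is illustrative):

```python
# Numerical illustration of interleaving: two 100 GHz combs offset by 50 GHz
# merge into one uniform 50 GHz comb. Anchored at 193.1 THz per ITU-T
# G.694.1; the channel count here is illustrative.

def comb(start_thz, spacing_thz, n):
    return [round(start_thz + i * spacing_thz, 3) for i in range(n)]

even = comb(193.10, 0.100, 4)   # 100 GHz grid
odd = comb(193.15, 0.100, 4)    # 100 GHz grid, offset by 50 GHz

merged = sorted(even + odd)
spacings = {round(b - a, 3) for a, b in zip(merged, merged[1:])}
assert spacings == {0.05}       # the merged comb is a uniform 50 GHz grid
```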

 Position of the ITL board in the WDM system:


 All Reconfigurable OADM units provide four indicators.

Indicator   Name                              Color
STAT        Board hardware status indicator   Red, green
ACT         Service active status indicator   Green
PROG        Board software status indicator   Red, green
SRV         Service alarm indicator           Red, green, yellow


 The MR8V adds/drops and multiplexes eight channels of signals, and adjusts the
multiplexed input optical power of WDM-side signal and the input optical power
of each adding channel.

 Provides the interface to cascade another optical add and drop multiplexing board
to achieve expansion.
 As a type of reconfigurable optical add and drop multiplexing unit, the WSM9
board is used with the WSD9 board to implement wavelength grooming at the
nodes in the DWDM network.

 Basic function: Configures any wavelengths to any interfaces. A node on the ring
or chain network can receive any wavelengths at the local station through any
interfaces to achieve the dynamic wavelength allocation.

 TN16WSM9

 Supports signals with a 50 GHz channel spacing.

 Supports flexible grid wavelength signals. The signals have continuous n x slice GHz spectrums (one slice is equal to 50 GHz, and n is an integer ranging from 1 to 8) to meet the bandwidth requirements of high-rate services. When multi-rate services are transmitted together, Gridless flexibly allocates bandwidth to improve the bandwidth utilization efficiency.

 TN96WSM9

 Supports extended C-band wavelengths

 Supports flexible grid wavelength signals. The signals have continuous n x slice GHz spectrums (one slice is equal to 12.5 GHz, and n = 3 to 32) to meet the bandwidth requirements of high-rate services. Gridless flexibly allocates bandwidth to improve the bandwidth utilization efficiency.
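The two slice granularities can be checked with a small validator; a sketch using the slice sizes and n ranges stated above (the board-name keys are just labels):

```python
# Sketch: validate a requested spectrum width against each WSM9 variant,
# using the slice sizes and n ranges stated above. Board keys are labels.

SLICE = {"TN16WSM9": (50.0, range(1, 9)),    # n x 50 GHz, n = 1..8
         "TN96WSM9": (12.5, range(3, 33))}   # n x 12.5 GHz, n = 3..32

def valid_width(board, width_ghz):
    slice_ghz, n_range = SLICE[board]
    n = width_ghz / slice_ghz
    return n == int(n) and int(n) in n_range

assert valid_width("TN16WSM9", 150.0)      # 3 x 50 GHz
assert not valid_width("TN96WSM9", 25.0)   # n = 2 is below the minimum of 3
```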
 As a type of reconfigurable optical add and drop multiplexing unit, the RDU9
board is used with the WSM9 board to implement the wavelength grooming at the
nodes in the DWDM network.

 Basic function: Broadcasts the signals received from the main optical path in nine
directions at the same time.
 Provides service broadcasting function, and supports the function of configurable
multiplexing any wavelengths. Any node on a ring or chain network can broadcast
the signals received from the main optical path as nine channels of the same
signals, and can input any wavelengths added locally to any port.

 Application:
 As a type of wavelength selective switching board, the DWSS20 board is used with
the optical multiplexer and demultiplexer unit and optical add and drop
multiplexing unit to implement wavelength grooming at the nodes in the DWDM
network.

 Wavelength adding: adds any wavelengths from any directions through ports AM1
to AM20 and outputs the wavelengths through the OUT port.

 Wavelength dropping: receives optical signals from the main path through the IN
port, drops wavelengths to any directions, and outputs them through ports DM1
to DM20.
 MCS0816: Dual Multicast Switching Board (Extended C-band, 8D, 16 add/drop
multiplexing paired ports)

 Note:

 The AMx or DMx optical port of the TN51MCS0816 board can be connected only to a coherent OTU board or a protection board such as the OLP, DCP, or QCP.

 The wavelength services added from the same AM optical port can be transmitted to only one OUT optical port, and the same wavelength cannot be added twice to the same OUT optical port.

 The optical signals of the same wavelength can be dropped from different
DM optical ports, but one DM optical port can receive services of only one IN
optical port.

 Wavelength adding: Adds up to 16 coherent wavelength signals of any wavelength to any of the AM1 to AM16 optical ports and outputs them from any of the OUT1 to OUT8 optical ports. The wavelength services added from the same AM optical port can be transmitted to only one OUT optical port, and the same wavelength cannot be added twice to the same OUT optical port.

 Wavelength dropping: Receives at most 16 coherent wavelength signals from 8 line-side optical ports and grooms the received signals to any DM optical port for wavelength dropping. Optical signals of the same wavelength can be dropped from different DM optical ports, but one DM optical port can receive services from only one IN optical port.
 The optical amplifier board amplifies the power of the multiplexed optical signals
to extend the transmission distance.

 Positions of EDFA and Raman boards in a WDM system


 Amplifies the input optical signals in C band. The total wavelengths range from
1529 nm to 1561 nm.

 Supports the system to transmit services over different fiber spans without
electrical regeneration.

 The OAU1 adjusts the gain continuously based on the input optical power:

 OAU100: 16 dB to 25.5 dB

 OAU101/OAU102: 20 dB to 31 dB

 OAU103: 24 dB to 36 dB

 OAU105: 23 dB to 34 dB

 OAU106: 16 dB to 23 dB

 OAU107: 19 dB to 27 dB; also supports adjustment of the maximum total output optical power.
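Board selection against a target gain follows directly from the ranges above; a minimal sketch, treating the required gain as the span loss to be compensated:

```python
# Sketch: list the OAU1 variants whose adjustable gain range covers a target
# gain (for example, the span loss to be compensated), per the ranges above.

OAU1_GAIN_DB = {
    "OAU100": (16, 25.5), "OAU101": (20, 31), "OAU102": (20, 31),
    "OAU103": (24, 36), "OAU105": (23, 34), "OAU106": (16, 23),
    "OAU107": (19, 27),
}

def boards_for_gain(gain_db):
    return sorted(name for name, (lo, hi) in OAU1_GAIN_DB.items()
                  if lo <= gain_db <= hi)
```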
 The RAU2 board integrates a backward Raman module, an EDFA module, and a VOA. Compared with an EDFA, the RAU2 board improves system performance in addition to amplifying C-band optical signals at the receive end.

 Position of the RAU2 board in the WDM system (OTM site)


 The CTU board consists of an overhead processing module, clock module,
monitoring module, communication module, CPU and control module, and power
supply module.

 Clock module: The clock module consists of the clock source selection sub-module, IEEE 1588 clock synchronization sub-module, and active/standby CTU board switching control sub-module. It locks to the external clock source, service clock source, or local clock source to provide the CTU board and the system with a synchronization clock source, and updates the clock signals periodically to ensure time synchronization within an NE.

 CPU and control module: Controls, monitors, and manages each functional
module on the CTU board.

 Power supply module: Powers the CTU board and supplies 3.3 V power for the EFI, FAN, and PIU boards.

 Monitoring module: Detects whether boards in the subrack are in place and
reports associated alarms to the U2000.

 Overhead processing module: Receives and processes overhead signals from service boards, and sends the processed overhead signals to the service boards.

 Communication module: Communicates with other boards in the subrack through the Ethernet channel and reports their data to the U2000.
 The SCC board consists of the overhead processing module, clock module,
monitoring module, communication module, CPU and control module, and power
supply module.

 CPU and control module implements the control, monitoring and management of
each functional module on the board.

 Overhead processing module sends the overhead signals to the service board.

 Monitoring module detects whether the boards are in position and reports alarms
to the U2000.

 Clock module provides the clock source for the system.

 Communication module communicates with each board, and reports the data of
other boards to the U2000.

 The power supply module of the OptiX OSN 8800 provides the entire system with 3.3 V integrated power backup to protect the 3.3 V power supply of any board in the system. In addition, it provides power backup for boards whose total power consumption is less than 60 W.
 OSC boards transmit optical supervisory information between two NEs. They provide highly reliable network monitoring because the OSC signal is carried on a wavelength different from the service wavelengths.

 Positions of OSC boards in a WDM system


 The AST2 board is independent of the SCC. When the SCC is not properly installed, the AST2 board can still ensure ECC pass-through through its two optical interfaces and monitor other stations.

 When the AST2 board works with the SFIU board, the supervisory channel wavelength for TM1 is 1511 nm and that for TM2 is 1491 nm.

 When the AST2 board works with the FIU board, the supervisory channel signal
wavelength is 1511 nm.
 Optical protection boards provide 1+1 protection or 1:1 optical line protection for
services using their dual-fed and selective receiving or selective feeding and
selective receiving function.

 Spectrum analyzer boards support centralized monitoring of optical signals without impacting signal performance.

 Dispersion compensation boards compensate for the dispersion accumulated during fiber transmission of optical signals and compress the pulses of the propagated optical signals, enabling the propagated optical signals to be restored at the output end.

 The STG board is a type of clock board, which locks the reference clock source and
provides clock signals and frame signals to the system. The clock signals comply
with ITU-T G.813 and ITU-T G.823. The STG board can apply to the optical-layer
and electrical-layer scenarios.
 The OPM8 board provides eight ports, and each port supports optical power
monitoring.

 Detects optical power of each wavelength and reports to the SCC board.

 TN12OPM8/TN15OPM8 supports OSNR detection for 10 Gbit/s, 40 Gbit/s, 100 Gbit/s, and 200 Gbit/s signals. OSNR detection for Flexible Grid wavelengths is not supported. The TN12OPM8/TN15OPM8 board can interoperate with the Optical Doctor management system (a license is required).

 Operating wavelength range: 1529 nm to 1561 nm

 Detection accuracy for optical power: ±1.5 dB


 Reference answer:

1. The boards of the OptiX OSN 8800 are classified into the following types:
optical transponder unit, OTN tributary/line unit, cross-connect unit,
demultiplexer/multiplexer, fixed/reconfigurable optical add/drop unit, optical
amplifier unit, system control and communication unit, optical supervisory
channel unit, optical protection unit, optical spectrum analyzer unit, and optical
variable attenuation unit.
 NG WDM can be applied to the access layer, metro aggregation layer, metro core
layer, and backbone core layer.

 At the backbone layer, NG WDM equipment can be interconnected with metro DWDM equipment, SDH equipment, and datacom equipment to provide a large-capacity transmission channel for various services. The OptiX OSN 8800 is applicable to national and provincial backbone networks for long-distance, large-capacity transmission. It meets the requirements of ultra-large-capacity and ultra-long-distance transmission, and provides a stable platform for multi-service operation and future network upgrade and expansion.

 The OptiX OSN 8800 uses the dense wavelength division multiplexing technology
to implement multi-service, large-capacity, and transparent transmission. It
provides flexible service grooming functions. It not only implements wavelength-
granularity ROADM grooming at the optical layer, but also implements grooming
of sub-wavelength services at the ODUk-granularity (k=0/1/2/2e/3/4/flex) within
each wavelength. This greatly improves the flexibility of service grooming and
bandwidth utilization.

 The OptiX OSN 8800 can form an end-to-end OTN network with the OptiX OSN
9800 and OptiX OSN 1800. The typical application of the OptiX OSN 8800 is to
form an OTN network. In addition, in the OCS network, it can also be networked
with the NG-SDH/PTN/datacom equipment to implement a complete transmission
solution.
 NG WDM uses the L0+L1+L2 three-layer architecture. The L0 optical layer
supports wavelength multiplexing/demultiplexing and DWDM optical signal
adding/dropping. The L1 electrical layer supports cross-connection of ODUk/VC
services. The L2 layer implements Ethernet/MPLS-TP switching.

 Through the backplane bus, the system control board controls the other boards and provides functions such as inter-board communication, service grooming between boards, and power supply. The backplane bus includes the control and communication bus, electrical cross-connect bus, and clock bus.

 The functions of the modules in the figure are as follows:

 Optical-layer boards process optical-layer services and implement optical-layer grooming at the λ level.

 Electrical-layer boards process electrical-layer signals and perform optical-to-electrical conversion. Through the centralized cross-connect unit, electrical-layer signals can be flexibly groomed at different granularities.

 The system control and communication board is the control center of the
equipment. It works with the network management system to manage the
boards of the equipment and realize the communication between the
equipment.

 The auxiliary interface unit provides input and output ports for clock/time
signals, alarm output and cascading ports, and alarm input/output ports.
 NG WDM supports point-to-point, chain, ring, and mesh networking modes. It can
work with other NG WDM equipment to implement a complete transport network
solution.

 Different networking modes suit different application scenarios, and the networking mode can be selected based on service requirements.

 Point-to-point networking

 Point-to-point networking is the simplest and most basic networking mode and is used for end-to-end service transmission. It is generally used for common voice services, data private line services, and storage services.

 Chain networking

 When some wavelengths need to be added or dropped locally and other wavelengths continue to be transmitted, a chain network consisting of optical add/drop multiplexing devices is required. The service types on a chain network are similar to those on a point-to-point network, but a chain network is more flexible: it can be used for point-to-point services as well as for convergence and broadcast services in simple networking.
 Ring networking

 The security and reliability of the network are important to the network
service quality. To improve the protection capability of the transmission
network, most of the metropolitan DWDM networks adopt the ring topology.
The ring topology is applicable to the most widely used services, such as
point-to-point services, convergence services, and broadcast services. A ring
network can also derive various complex network structures. For example:
Two tangent rings, two intersecting rings, and ring with chain.

 Ring with chain topology

 Tangent ring networking

 Intersecting ring networking


 Mesh networking

 In a mesh network, a large number of nodes are connected through direct routes, so the network has no node bottleneck. It also provides routing bypass to ensure service continuity when a device fails. Because multiple routes are available between two nodes, service transmission reliability is high. This is one of the main networking modes of an intelligent optical network; it is flexible, easy to expand, and widely used in ASON networks.
 Principles for configuring optical multiplexer and demultiplexer boards:

 For an 80-channel OTM site, if M40V/D40 boards are used for adding/dropping wavelengths, ITL boards must be configured.

 Principles for configuring optical amplifier boards:

 According to the optical power budget principle, the optical amplifier board
needs to be configured according to the actual scenario.

 Principles for configuring optical supervisory channel boards:

 The SC1 and ST2 boards control the receiving and transmitting of optical
supervisory signals, and transmit and extract the overhead information of the
system.

 To implement the IEEE 1588v2 synchronous clock processing function, transparent transmission of two FE signals, and line fiber quality monitoring, use the ST2 board.
 The OTM equipment is applied to the terminal station.

 In the receive direction: The line signals received from the west are separated
from the optical supervisory signals and the main channel optical signals. The
optical supervisory signals are sent to the optical supervisory unit for
processing. The main channel optical signals are amplified and then sent to
the optical demultiplexing unit, after being separated, all wavelengths are
transmitted to the corresponding wavelength conversion unit and then sent
to the local client equipment.

 The signal flow in the transmit direction is the reverse process in the receive
direction.

 At an OTM site, the output optical power of the OTU boards in the transmit direction is flat. Therefore, the M40V board is used.

 In addition, configure fixed optical attenuators (FOAs) at the receive ends of the client side and WDM side of the OTU board, and adjust the received optical power to a value between the receiver sensitivity and the overload point.
 Optical and electrical layers are separated. It is recommended that optical-layer boards be configured in OptiX OSN 8800 universal platform subracks and electrical-layer boards in OSN 8800 T32 enhanced subracks.
 The OLA equipment is used in the optical amplifier station to amplify the optical
signals transmitted in the two directions.

 The optical supervisory signals and main channel optical signals are separated from the received line signals. The optical supervisory signals are sent to the optical supervisory unit for processing. The main channel optical signals are amplified by the optical amplifier unit, multiplexed with the processed optical supervisory signals, and then sent to the optical line for transmission.

 The optical fiber connection at the OLA site is simple. The OAU1 board is used as
the optical amplifier board.

 The optical supervisory channel (OSC) board at the OLA site uses the ST2 board.
 OptiX OSN 8800 UPS:

 Configuration principle of the SCC board: Each site must be configured with SCC boards. By default, the system control board is configured with 1+1 protection.

 Configuration principle of the supervisory channel board: The supervisory channel board is preferentially configured in slot IU16, and one ST2 board can be configured.

 Configuration principle of the optical amplifier: According to the optical power budget principle, optical amplifier boards are configured based on the actual scenario. The optical amplifiers are installed on both sides of the subrack.

 The FIU boards are installed on both sides of the subrack, in slots IU3 and IU15 as shown in the figure. The optical amplifiers are then inserted in sequence.

 Configuration principle of optical protection boards: OLP boards must be configured when optical line protection is required.
 FOADM sites can be classified into the following types:

 Parallel FOADM (or referred to as back-to-back OTM) adopts M40V/D40.

 Serial FOADM uses the MR2/MR4/MR8/MR8V.


 Parallel FOADM sites (back-to-back OTM sites) are generally used at intermediate
sites.

 In the receive direction, the optical supervisory signals and main channel
optical signals are separated from the line signals received from the west. The
optical supervisory signals are sent to the optical supervisory unit for
processing. The main channel optical signals are amplified and then sent to
the optical demultiplexing unit.

 Some wavelengths are dropped to the wavelength conversion unit and then sent to the local client equipment. The other wavelengths are not dropped locally: after passing through, they are multiplexed with the locally added wavelengths by the optical multiplexing unit and then amplified. Finally, the signals are multiplexed with the processed optical supervisory signals and transmitted to the line. For a pass-through service (taking west-to-east as an example), directly connect the west D40 to the east M40V with a fiber patch cord; if there are multiple directions, patch through the ODF. When services need to be added or dropped, connect the M40V/D40 to OTU boards.

 The signal flow in the transmit direction is the reverse process in the receive
direction.

 When the service needs to be regenerated, the OTU board with the regeneration
function can be connected between D40 and M40V.
 You can also use the OSN 8800T32 general/enhanced subrack or OSN 8800T64
subrack.
 The FOADM site that uses the MR2/MR4/MR8/MR8V series TTF boards is called
the serial FOADM. The serial FOADM equipment processes the optical signals in
the two transmission directions respectively.

 The MR2/MR4/MR8/MR8V boards can add/drop 2, 4, or 8 wavelengths respectively and can be cascaded. A maximum of eight MR2 boards can be cascaded, that is, a maximum of 16 wavelengths can be added or dropped. If the number of wavelengths is greater than 16, you are advised to use the parallel FOADM (M40V/D40).
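The cascading rule above can be expressed as a small planning helper. This is only a sketch of the rule stated in the note (2 wavelengths per MR2, at most 8 cascaded boards, parallel FOADM beyond 16 wavelengths); the function name is illustrative, not a Huawei planning tool.

```python
import math

# Planning rule from the note: each MR2 adds/drops 2 wavelengths and
# at most 8 MR2 boards can be cascaded (16 wavelengths total); above
# that, the parallel FOADM (M40V/D40) is recommended.
MR2_WAVELENGTHS = 2
MAX_CASCADED_MR2 = 8

def foadm_scheme(add_drop_wavelengths):
    """Suggest a FOADM scheme for a given add/drop wavelength count."""
    if add_drop_wavelengths <= MR2_WAVELENGTHS * MAX_CASCADED_MR2:
        boards = math.ceil(add_drop_wavelengths / MR2_WAVELENGTHS)
        return f"serial FOADM: {boards} cascaded MR2 board(s)"
    return "parallel FOADM (M40V/D40)"

print(foadm_scheme(6))   # serial FOADM: 3 cascaded MR2 board(s)
print(foadm_scheme(24))  # parallel FOADM (M40V/D40)
```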

 If a FOADM site consists of two MR4 boards:

 In the receive direction, the FIU board separates the optical supervisory
signals and the main channel optical signals from the received line signals,
and sends the optical supervisory signals to the optical supervisory unit for
processing.

 The optical signals of the main channel are amplified by the optical amplifier and then sent to the MR4. Some wavelengths are dropped to the optical transponder unit and then sent to the local client equipment.

 After passing through, the wavelengths are multiplexed with the locally
inserted wavelengths, then amplified, multiplexed with the processed optical
supervisory signals, and then transmitted to the line for transmission.

 The signal flow in the transmit direction is the reverse process in the receive
direction.
 As shown in the figure, in the OSN 8800 T32 enhanced subrack the UXCH boards in slots IU9 and IU10 form a symmetry axis, and each side is powered by its own PIU board. In this case, the east and west directions do not need two separate subracks: insert the boards for west add/drop wavelengths in the area to the left of the UXCH, and the boards for east add/drop wavelengths in the area to the right.

 The configuration principles of other boards are the same as those of the parallel
FOADM (M40V/D40).
 ROADM site model

 ROADM sites consisting of RDU9 and WSM9 boards (40/80 wavelengths)

 ROADM sites consisting of WSD9 and WSM9 boards (40/80 wavelengths)

 ROADM site consisting of WSMD2/WSMD4/WSMD9 boards


 As a type of reconfigurable optical add and drop multiplexing unit, the WSMD9
board is used with the optical multiplexer and demultiplexer unit and optical add
and drop multiplexing unit to implement wavelength grooming at the nodes in the
DWDM network.

 Optical ports AM1-AM8, DM1-DM8, EXPI, and EXPO on the WSMD9 board can also be used to interconnect with boards in other dimensions.

 When coherent OTU or line boards are used, the M40V and D40 boards can be removed: the WSMD9 boards can be directly connected to the coherent OTU or line boards because these boards support wavelength selection (applicable only to 50 GHz channel spacing).
 As shown in the figure, the optical-layer and electrical-layer subracks are separated.
In this case, electrical-layer boards are installed in the OSN 8800 T32 enhanced
subrack for electrical-layer grooming.

 The WSM9 and RDU9 boards can be replaced with WSMD9 boards.
 Reference answer:

 The 80-channel signal flow at the OLA site is the same as the 40-channel signal flow. For
details, see page 17.
 NG WDM system supports the 40/80×10Gbit/s transmission solution, which is
widely used on the live network. Generally, the optical layer and electrical layer
grooming are used together.
 For the 10G wavelengths that need to pass through the local station, use the
following methods:

 When the service signal quality is good, that is, the OSNR value is high, optical-layer grooming can be used, and optical-layer pass-through of the service can be configured directly by using the ROADM.

 When the service signal quality is poor, that is, the OSNR value is low, the
signal can be electrically regenerated. Two 10Gbit/s line units need to be
configured to use the ODU2 or 4×ODU1 electrical cross-connection at the
electrical layer to configure the regeneration service signal.
 The newly added and upgraded 40G wavelengths can be accessed to the
multiplexer unit at the same time with the 10G wavelengths on the live network.
This does not affect the existing and new services. During system design, pay
attention to dispersion management and OSNR calculation.
 For the 40G wavelengths that need to pass through the local station, use the
following methods:

 When the service signal quality is good, that is, the OSNR value is high, optical-layer grooming can be used, and optical-layer pass-through of the service can be configured directly by using the ROADM.

 When the service signal quality is poor, that is, the OSNR value is low, the
signal can be electrically regenerated. In this case, two 40Gbit/s line units
need to be configured to use the ODU3 or 4×ODU2 electrical cross-
connection to configure the regeneration service signal.

 For 40G electrical-layer grooming, NG WDM equipment supports ODU3-granularity grooming, and a 40G OTU3 service can be divided into four ODU2 services for grooming. On 40G service regeneration boards, four ODU2 signals are groomed to the transmit side and then encapsulated into an OTU3 signal for transmission.
 With the development of mobile networks to LTE, the popularity of intelligent
terminals, and the development of new services, especially video services, the
transmission capability of the transmission network needs to be further improved.
Advanced technologies such as ePDM-QPSK, ePDM-BPSK, and coherent reception
provide large-capacity and ultra-high-bandwidth 100G and 40G coherent
transmission systems for transmission networks.

 The coherent transmission system has the following features:

 Large capacity

 Long distance

 Low dispersion

 Low latency

 Supports hybrid transmission of 10G/40G/100G signals

 As service requirements grow, current 40G WDM transmission systems are gradually upgraded to 100G transmission systems, so 40G/100G hybrid transmission and coherent hybrid transmission are essential. NG WDM supports both, ensuring smooth system upgrade. Thanks to advanced coherent detection technology, no DCM or DCU board is required for CD or PMD compensation.
 The 100G direct encapsulation technology maps 100G services to OTU4 signals so that a 100G service can be transmitted over a single wavelength. The 100G inverse multiplexing technology can also be used to carry 100G services on a 10G or 40G DWDM network.
 In general, NG WDM equipment is well suited to grooming high-rate services and combines well with backbone-layer core routers, providing good support for all-IP networks.
 To better adapt to the future multidimensional structure of the NG WDM and
better integrate with the IP/MPLS, the NG WDM provides more protection
schemes than the traditional WDM.

 The OptiX NG WDM provides protection for different service layers, effectively
supporting the application of the DWDM system in the OTN network.
 Compared with a traditional WDM network, an ASON-empowered WDM network
has advantages in service configuration, bandwidth utilization, and protection. In
the traditional transmission network, the WDM transmission equipment functions
as fibers. Currently, the WDM transmission equipment also carries services, posing
more requirements on the operability of the WDM equipment. The traditional
network faces the following challenges:

 Service configurations are complex, and it is time-consuming to expand system capacity and provision services.

 Bandwidth utilization is low and inefficient. For example, on a ring network, half of the bandwidth is always vacant.

 Limited protection modes are applicable, among which the self-healing protection has poor performance.

 ASON enhances the network connection management and fault recovery capabilities by introducing signaling to the traditional transmission network and providing a control plane. It supports end-to-end service configuration and different service level agreement (SLA) levels: Diamond, Silver, and Copper ASON trails.

 Huawei ASON uses LMP as the link management protocol, OSPF-TE as the routing protocol, and RSVP-TE as the signaling protocol.

 WDM ASON includes electrical-layer ASON and optical-layer ASON. The use of
ROADM is a prerequisite for implementing optical-layer ASON.
 The optical power budget is used to determine the regeneration section distance. The optical power budget process is actually the process of configuring amplifiers: the optical power at the transmit end must meet the incident optical power requirement, and the optical power at the receive end must fall within the working range of the receiver. In a DWDM system, the power loss introduced by line fibers, optical modules, and optical components needs to be compensated by optical amplifiers (EDFA or Raman amplifiers). When designing a network, calculate the fiber loss of the entire link and consider the system margin (a 3 dB margin is used when the project has no special requirement). Then configure the amplifiers first, and adjust the attenuation according to the configuration of the dispersion compensation modules.

 The fiber attenuation coefficient is subject to the actual test value in the project.

 If the attenuation coefficient is accurate, the line power redundancy is generally 3 dB, which includes the insertion loss of the FIU boards.

 During network design, if the actual attenuation value of each optical cable line is known, add the system engineering margin (generally 3 dB) to the actual test value. If the OSC is used, the extra power loss of the FIU boards needs to be considered, generally 1 dB (FIU loss at both ends). If only ESC is used, the FIU power budget does not need to be considered.
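The budgeting rules above can be sketched as a small calculation: span loss = fiber attenuation + engineering margin (3 dB) + FIU insertion loss (1 dB, only when the OSC is used). The span length and attenuation coefficient below are illustrative values, not figures from the course.

```python
# Optical power budget sketch based on the rules above. The amplifier
# configured for the span must have a gain range covering this loss.
def span_loss_db(length_km, atten_db_per_km,
                 margin_db=3.0, fiu_loss_db=1.0, use_osc=True):
    """Total loss to compensate on one span, in dB."""
    loss = length_km * atten_db_per_km + margin_db
    if use_osc:                 # ESC-only systems skip the FIU budget
        loss += fiu_loss_db
    return loss

# Illustrative 80 km span with a measured coefficient of 0.275 dB/km:
loss = span_loss_db(length_km=80, atten_db_per_km=0.275)
print(f"required amplifier gain = {loss:.1f} dB")  # 22.0 + 3 + 1 = 26.0 dB
```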
 In the case of no dispersion compensation, each regeneration section should be shorter than the dispersion-limited distance. If a regeneration section is longer than the dispersion-limited distance, dispersion compensation should be performed.

 Dispersion-limited distance (km) = Dispersion tolerance (ps/nm) / Dispersion coefficient (ps/(nm·km)). The dispersion tolerance depends on the laser (light source); different rates and light source qualities give different dispersion tolerance values. The dispersion coefficient depends on the fiber.

 Currently, DCM modules for two fiber types are used on the live network: G.652 (SMF) and G.655 (LEAF/TrueWave).

 G.652: The typical dispersion coefficient of a single-mode fiber (SMF) is 17 ps/(nm·km). However, when converting the dispersion tolerance of an OTU to a dispersion-limited distance, use a fiber dispersion value of 20 ps/(nm·km).

 G.655: The typical dispersion coefficient is 4.5 ps/(nm·km). However, when converting the dispersion tolerance of an OTU to a dispersion-limited distance, use a fiber dispersion value of 6 ps/(nm·km).
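The formula above is a one-line division; a short sketch using the planning coefficients given above makes the comparison concrete. The 800 ps/nm dispersion tolerance below is a hypothetical OTU value chosen for illustration only.

```python
# Dispersion-limited distance = tolerance / coefficient, using the
# planning coefficients from the note (20 ps/(nm·km) for G.652,
# 6 ps/(nm·km) for G.655).
PLANNING_COEFF = {"G.652": 20.0, "G.655": 6.0}  # ps/(nm·km)

def dispersion_limited_km(tolerance_ps_nm, fiber_type):
    """Distance (km) a signal can travel before exceeding its tolerance."""
    return tolerance_ps_nm / PLANNING_COEFF[fiber_type]

tolerance = 800.0  # ps/nm, hypothetical OTU dispersion tolerance
for fiber in ("G.652", "G.655"):
    print(fiber, round(dispersion_limited_km(tolerance, fiber), 1), "km")
```

If the regeneration section is longer than the printed distance, dispersion compensation (DCM) must be configured.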
 The Optical Signal-to-Noise Ratio (OSNR) is the most important indicator for measuring the performance of a DWDM system. It is the ratio of the signal optical power to the noise optical power in a transmission link.

 OSNR (dB) = 10 × lg(P_signal (mW) / P_noise (mW)) = P_signal (dBm) − P_noise (dBm)

 When the OSNR decreases to a certain extent, system performance is seriously endangered. In a DWDM system with multiple cascaded line optical amplifiers, compensating the line loss with optical amplifiers introduces the amplifiers' spontaneous emission noise. The noise optical power is mainly the accumulated amplified spontaneous emission (ASE) noise of the amplifiers. As a result, the optical signal-to-noise ratio decreases and the transmission performance deteriorates.

 According to the ITU-T definition, the signal power used for calculating the OSNR is the total signal power within a 0.8 nm bandwidth (for a 40-channel system), and the noise power is the total noise power within a 0.1 nm bandwidth.
 In the DWDM system, the decrease of the optical signal-to-noise ratio is mainly
caused by the introduction of ASE noise by each OA unit. The noise introduced on
the line can be ignored during planning.

 On the optical line, the optical power of the signal and noise decreases with the
attenuation of the optical fiber.

 As shown in the figure, the noise introduced by each OA board is the same, but the OSNR decreases stage by stage starting from the first OA board.
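The stage-by-stage OSNR decrease can be quantified with a common engineering approximation that is not stated on the slide: for cascaded amplifier stages, the inverse linear OSNRs add, 1/OSNR_total = Σ 1/OSNR_i. For identical stages this reduces to subtracting 10·lg(N).

```python
import math

# Cascaded-OSNR approximation (assumption, not from the slide):
# with N identical OA stages of per-stage OSNR, the total linear OSNR
# is per-stage OSNR divided by N.
def cascade_osnr_db(per_stage_osnr_db, n_stages):
    per_stage_linear = 10 ** (per_stage_osnr_db / 10)
    return 10 * math.log10(per_stage_linear / n_stages)

for n in (1, 2, 4, 8):
    print(n, "stage(s):", round(cascade_osnr_db(30.0, n), 2), "dB")
# Doubling the number of identical OA stages costs about 3 dB of OSNR.
```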
 The nonlinear effect refers to effects caused by the nonlinear polarization of the medium, including optical harmonics, frequency multiplication, stimulated Raman scattering, two-photon absorption, saturation absorption, self-focusing, and self-defocusing. In essence, all media are nonlinear; in general, the nonlinearity is small and not observable. When the incident optical power in the fiber is small, the fiber behaves linearly. When optical amplifiers and high-power lasers are used in an optical fiber communication system, the nonlinear characteristics of the fiber become more and more obvious. The main reason is that the optical signal in a single-mode fiber is confined within the mode field, and the effective area of a single-mode fiber is very small (for example, about 80 μm² for G.652 fiber). Therefore, the optical power density is very high, and the high optical power is maintained over a long distance.

 Therefore, the nonlinear effect is closely related to the optical power. To prevent nonlinear effects, the total optical power in the fiber should be less than 20 dBm (single-wavelength optical power less than 4 dBm).
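The two thresholds above are consistent: total power in dBm is per-channel power plus 10·lg(N), so 40 channels at 4 dBm each give roughly 20 dBm. A short check:

```python
import math

# P_total(dBm) = P_channel(dBm) + 10·lg(N_channels).
# With 40 channels at 4 dBm each, the total is about 20 dBm,
# matching the nonlinearity threshold quoted above.
def total_power_dbm(per_channel_dbm, n_channels):
    return per_channel_dbm + 10 * math.log10(n_channels)

print(round(total_power_dbm(4.0, 40), 1), "dBm")  # 20.0 dBm
```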

 The Super WDM technology can suppress the non-linear effect in the transmission
system.
 Reference answer:

1. ASE noise is introduced because of the use of EDFA.

2. Coherent boards have a large dispersion tolerance.


 FOADM can be divided into two modes: Serial and parallel.

 The parallel FOADM mainly includes the M40V+D40, which is also called the
back-to-back OTM solution.

 The serial FOADM mainly includes the MR2+MR2, MR4+MR4 and MR8+MR8
solutions. This course does not consider the CWDM and single-fiber
bidirectional CWDM solutions.
 The FOADM that is constructed in back-to-back OTM mode uses the DMUX to
demultiplex all wavelengths to the local end. Then, certain wavelengths can be
directly passed through according to the wavelength planning. Some wavelengths
can be dropped to the local end, and some wavelengths can pass through
electrical regeneration.

 The back-to-back OTM networking mode is flexible and allows each wavelength to
be independently configured. Therefore, during the expansion of the local
add/drop wavelengths, services of other wavelengths are not interrupted.

 When the FOADM in back-to-back OTM mode is applied in a ring network, it can block the extra noise generated by the EDFAs, thus avoiding ring self-excitation caused by EDFA noise circulating around the ring.

 The FOADM in back-to-back OTM mode also has disadvantages. For example, when there are few services at the early stage of network construction, the MUX and DMUX still need to be configured, so the cost is high and the power budget is large. Therefore, this OADM mode is suitable for central nodes with a large number of initial add/drop wavelengths.
 MR2 is the most common serial FOADM board. In a typical application, the east and west MR2 boards handle the add and drop directions for the east and west respectively. The MR2 consists of two dielectric thin-film filters (TFFs) corresponding to different wavelengths. After the optical multiplex section signal passes through a TFF, the channel corresponding to the TFF's center wavelength is filtered out and dropped, while the other wavelengths are reflected by the TFF and continue to be transmitted downstream. The east and west MR2 boards are placed symmetrically to ensure that east and west services are processed in both the add and drop directions. The advantages of the MR2 solution are low initial configuration cost and simple configuration; TFF combinations can be selected for the wavelengths to be dropped.

 Boards similar to the MR2 are the MR4, MR8, and MR8V. If many wavelengths are added or dropped initially (for example, more than two), use the MR4, MR8, or MR8V. If only a few wavelengths are needed initially, the MR2 can be used and later extended by cascading MR2 boards in series. However, the insertion loss introduced by serial MR2 boards changes the power budget of the site, and pass-through services are interrupted during system expansion.
 Principles for configuring optical multiplexer and demultiplexer boards:

 For an 80-channel OTM site, if M40V/D40 boards are used for adding/dropping wavelengths, ITL boards must be configured.

 Principles for configuring optical amplifier boards:

 According to the optical power budget principle, the optical amplifier board
needs to be configured according to the actual scenario.

 Principles for configuring optical supervisory channel boards:

 The SC1 and ST2 boards control the receiving and transmitting of optical
supervisory signals, and transmit and extract the overhead information of the
system.

 To implement the IEEE 1588v2 synchronous clock processing function, transparent transmission of two FE signals, and line fiber quality monitoring, use the ST2 board.
 Making use of the ROADM technology, the U2000 and Web LCT software adjusts
the status of wavelengths (add, drop or pass-through) to realize remote and
dynamic adjustment of wavelength status. The adjustment of a maximum of 80
wavelengths is supported.
 Wavelength selective switching means that any wavelength received by the input
port can be arbitrarily exchanged to an output port.

 In a WSS, the input signal is first demultiplexed, and each wavelength channel is sent to a wavelength routing device. Based on MEMS or liquid-crystal technology, the routing device cross-connects each wavelength to the multiplexer of the designated egress port according to the wavelength routing information. The multiplexer then combines the wavelengths switched to that port into one output, completing the wavelength selection and cross-connection process.

 Each wavelength path of the WSS has a VOA, which controls the optical signal
power. Therefore, the wavelength optical power control function can also be
implemented.
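The demultiplex, route, attenuate, and multiplex stages described above can be illustrated with a toy model. This is only a conceptual sketch; the port names, power values, and function are invented for illustration and are not Huawei software:

```python
# Toy model of a 1xN WSS: each input wavelength is routed to a configured
# output port (MEMS/liquid-crystal routing), and a per-wavelength VOA
# applies attenuation before the port multiplexer recombines the channels.

def wss_switch(channels, routing, voa_db):
    """channels: {wavelength: power_dBm}, routing: {wavelength: output port},
    voa_db: {wavelength: attenuation in dB}. Returns per-port channel powers."""
    outputs = {}
    for wl, power_dbm in channels.items():
        port = routing[wl]                            # wavelength routing decision
        out_power = power_dbm - voa_db.get(wl, 0.0)   # per-channel VOA
        outputs.setdefault(port, {})[wl] = out_power  # channels multiplexed per port
    return outputs

demo = wss_switch(
    channels={"ch1": -10.0, "ch2": -12.0},
    routing={"ch1": "west", "ch2": "east"},
    voa_db={"ch2": 3.0},
)
```

Here ch2 leaves the east port 3 dB lower than it entered, mirroring how the per-wavelength VOA equalizes channel powers.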
 Reconfigurable optical add/drop multiplexer (ROADM) boards add/drop any single
or multi-wavelength signals to/from a multiplexed signal, and route the signals to
any port in any order, achieving flexible wavelength grooming in multiple
directions. These boards apply to DWDM systems.
 The ROADM site consisting of the RDU9 and WSM9 boards can add/drop all dynamic wavelengths in the ring and supports inter-ring expansion, with a maximum of eight dimensions. The RDU9 board mainly implements the demultiplexing function: each wavelength-dropping port can be connected to a demultiplexing unit, and the broadcast signals are demultiplexed and then output to the OTU boards. The WSM9 board mainly implements dynamic, configurable multiplexing of any wavelength to any port. Any node on a ring or chain network can be configured with any wavelength, and any wavelength can be input to any port, so wavelength allocation is fully dynamic. ROADM equipment consisting of the RDU9 and WSM9 boards can be applied at a central site or an edge site. Its advantages are flexible, convenient capacity expansion without service interruption and low operating cost. The NMS software can be used to adjust the add, drop, and pass-through status of wavelengths, so the wavelength status can be adjusted dynamically.
 Through the multi-dimension ROADM function of the WSS, the NG WDM supports adding and dropping all dynamic wavelengths in the ring, supports inter-ring expansion, and achieves wavelength grooming in a maximum of eight dimensions. The WSD9 board mainly implements dynamic, configurable demultiplexing of any wavelength to any port: any node on a ring or chain network can output any wavelength and allocate it to any port, so wavelength allocation is fully dynamic. The WSM9 board mainly implements dynamic, configurable multiplexing of any wavelength to any port; again, any node can be configured with any wavelength on any port. The ROADM site consisting of the WSD9 and WSM9 boards can be applied at a central site or an edge site. Its advantages are flexible, convenient capacity expansion without service interruption and low operating cost. The NMS software can be used to adjust the add, drop, and pass-through status of wavelengths, so the wavelength status can be adjusted dynamically.
 Through the multi-dimension ROADM function based on the WSMD4 board, the NG WDM supports adding and dropping all wavelengths in a ring, supports inter-ring expansion, and supports a maximum of four dimensions of optical wavelength grooming. The WSMD4 board mainly implements dynamic, configurable multiplexing and demultiplexing of any wavelength to any port: any node on a ring or chain network can input any wavelength from any port and allocate any wavelength to any port, so wavelength allocation is fully dynamic. Working with the WSMD4 board, the ROADM equipment can add or drop all C-band even and odd wavelengths. A ROADM configured with the WSMD4 board can be used at a central site or an edge site. Its advantages are flexible, convenient capacity expansion without service interruption and low operating cost. The NMS software can be used to adjust the add, drop, and pass-through status of wavelengths, so the wavelength status can be adjusted dynamically.
 In a directioned scenario, the current path cannot be adjusted flexibly. If the
current path must be adjusted, a site visit is required to adjust the fiber
connections for the network.

 A directioned scenario applies to non-ASON networks.


 On a non-ASON network, the current path cannot be automatically adjusted in the
directionless scenario. When services are adjusted or the protection path is used in
case of a faulty working path, manually configure optical cross-connections to
achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection.
 Colored add/drop ports (fixed wavelength) have the advantages of lower insertion loss and lower cost. If new wavelengths need to replace the existing wavelengths, a site visit is necessary to connect the colored port of the line card/OTU to the matching add/drop port.

 On an ASON network, the services can be rerouted only on the same wavelength,
so the wavelengths may be blocked.
 Colorless add/drop ports (tunable) allow remotely provisioned reconfigurability of the ROADM. However, the OTU/line boards must be installed in the subrack in order to automatically provision new services. If the OTU/line boards are not physically present in the subrack, a site visit is necessary to install the required boards for the new services.

 On an ASON network, if a wavelength-tunable OTU or line board is used in the colorless scenario, service wavelengths can be flexibly converted during rerouting to avoid wavelength blocking.
 The flexible grid technique is compatible with existing fixed spectra. For a 40-wavelength system, the channel spacing is 100 GHz and each channel occupies 8 slices. For an 80-wavelength or 96-wavelength system, the channel spacing is 50 GHz and each channel occupies 4 slices.

 Compared with the current technique using fixed spectra, the flexible grid
technique improves spectrum utilization. A 400 Gbit/s signal requires 75 GHz
bandwidth. When fixed grid is used, 100 GHz (50 GHz x 2) spectral width is used.
After the flexible grid technique with a 12.5 GHz slice spacing is used, only 75 GHz
spectral width (12.5 GHz x 6) is required, saving 25 GHz bandwidth.
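The slice arithmetic above can be checked with a few lines of Python (a sketch; the 12.5 GHz slice width and channel widths come from the text):

```python
import math

SLICE_GHZ = 12.5  # flexible-grid slice width

def slices_needed(bandwidth_ghz):
    """Smallest number of 12.5 GHz slices covering a signal's bandwidth."""
    return math.ceil(bandwidth_ghz / SLICE_GHZ)

# Fixed-grid compatibility: 100 GHz channels = 8 slices, 50 GHz = 4 slices.
assert slices_needed(100) == 8 and slices_needed(50) == 4

# A 400 Gbit/s signal needing 75 GHz: 6 slices on the flexible grid,
# versus 2 x 50 GHz = 100 GHz on the fixed grid.
flex_ghz = slices_needed(75) * SLICE_GHZ   # 75.0 GHz used
saved_ghz = 2 * 50 - flex_ghz              # 25.0 GHz saved
```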
 NG WDM ROADM supports both intra-ring and inter-ring grooming. WSS+RMU/RDU and WSS+WSS configurations can be upgraded from 2-degree nodes to 20-degree ROADM nodes, that is, inter-ring grooming. Optical-layer grooming of 80 C-band wavelengths is also supported.

 The ROADM achieves reconfigurable wavelengths by blocking or cross-connecting wavelengths, turning static wavelength resource allocation into flexible, dynamic allocation. Working with the U2000, the ROADM technology adjusts the add, drop, and pass-through status of wavelengths so that the wavelength status can be adjusted dynamically. A maximum of 80 wavelengths is supported, with flexible optical-layer grooming from 1 to 20 degrees.
 This configuration generally applies to a terminal node. Services are not
interrupted during expansion.

 The WSMD4 in the figure can be replaced with the RDU9+WSM9, WSD9+WSM9,
WSMD2 or WSMD9.
 This section uses the west service as an example to describe the signal flow.

 Local services are added to the west WSMD4 board through the AM1 port
and then transmitted to the west through the OUT port. Services from the
east pass through the AM4 port on the west WSMD4 board and head west.

 In a colored & directioned scenario, to ensure that local services of NE1 can be transmitted in the west and east optical directions, one group of M40V+D40 must be configured for each optical direction. In each optical direction, the WSMD4 board connects to one M40V board and one D40 board.
 Services on NE1 can be transmitted along paths in direction west or east.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure proper service transmission on NE1. In the colored scenario, only the same wavelength can be used for service rerouting.

 In this scenario, to cross-connect local services on NE1 in direction west or east, only one group of M40V+D40 is required.
 The WSMD4 in the figure can be replaced with the RDU9+WSM9, WSD9+WSM9,
or WSMD9.

 Local services are added to the WSMD4 through the AM4 port and then
transmitted to the south through the OUT port. Services from the west and east
pass through the AM1 and AM3 ports on the WSMD4 and head south.

 In this scenario, to cross-connect services on NE1 in directions west, south, and east, three groups of M40V+D40 must be configured. In each optical direction, the WSMD4 board connects to one M40V board and one D40 board.
 Services on NE1 can be transmitted along paths in direction west, north, or east.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure proper service transmission on NE1. In the colored scenario, only the same wavelength can be used for service rerouting.

 In this scenario, to cross-connect local services on NE1 in direction west, north, or east, only one group of M40V+D40 is required.
 Services on NE1 can be transmitted along paths in direction west, north, or east.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure proper service transmission on NE1. If a wavelength-tunable OTU or line board is used in the colorless scenario, service wavelengths can be flexibly converted during rerouting to avoid wavelength blocking.

 A WSM9+WSD9 combination can be used for colorless scenarios in a non-coherent system.
 In the colorless scenario in a coherent system, when the WSMD4, WSMD9, RDU9, or TD20 board is used to drop services, a DEMUX board is not required. The coherent OTU board uses coherent receiver technology and can therefore correctly select its local drop wavelength from the multiplexed signal. The figure uses the TM20+TD20 combination to illustrate the colorless scenario.
 Local services are added to the WSMD4 board through the AM4 port and then
transmitted to the south through the OUT port. Services from the west, north, and
east pass through the AM1, AM2, and AM3 ports on the WSMD4 board, heading
for the south.

 In this scenario, to cross-connect services on NE1 in directions west, north, east, and south, four groups of M40V+D40 must be configured. In each direction, an M40V+D40 combination must be configured.
 Services on NE1 can be transmitted along paths in direction west, north, east, or
south.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. In the colored scenario, only the same wavelength can be used for service rerouting.

 In this scenario, to cross-connect local services on NE1 in direction west, north, east, or south, only one group of M40V+D40 is required.
 Services on NE1 can be transmitted along paths in direction west, north, east, or
south.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. If a wavelength-tunable OTU or line board is used in the colorless scenario, service wavelengths can be flexibly converted during rerouting to avoid wavelength blocking.

 A WSM9+WSD9 combination can be used for colorless scenarios in a non-coherent system.
 In the colorless scenario in a coherent system, when the WSMD4, WSMD9, RDU9, or TD20 board is used to drop services, a DEMUX board is not required. The coherent OTU board uses coherent receiver technology and can therefore correctly select its local drop wavelength from the multiplexed signal. The figure uses the TM20+TD20 combination to illustrate the colorless scenario.
 Services on NE1 can be transmitted along paths in nine directions.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. In the colored scenario, only the same wavelength can be used for service rerouting.

 In this scenario, to cross-connect local services on NE1 in direction 1 to direction 9, only one group of M40V+D40 is required.
 Services on NE1 can be transmitted along paths in nine directions.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. If a wavelength-tunable OTU or line board is used in the colorless scenario, service wavelengths can be flexibly converted during rerouting to avoid wavelength blocking.

 A WSM9+WSD9 combination can be used for colorless scenarios in a non-coherent system.
 In the colorless scenario in a coherent system, when the WSMD4, WSMD9, RDU9, or TD20 board is used to drop services, a DEMUX board is not required. The coherent OTU board uses coherent receiver technology and can therefore correctly select its local drop wavelength from the multiplexed signal. The figure uses the TM20+TD20 combination to illustrate the colorless scenario.
 Services on NE1 can be transmitted along paths in twenty directions.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. In the colored scenario, only the same wavelength can be used for service rerouting.

 In this scenario, to cross-connect local services on NE1 in direction 1 to direction 20, only one group of M40V+D40 is required.
 Services on NE1 can be transmitted along paths in twenty directions.

 To adjust the current path (for example, when services are adjusted or the
protection path is used in case of a faulty working path), manually configure
optical cross-connections to achieve flexible service grooming.

 On an ASON network, the rerouting function automatically finds a path and automatically creates an optical cross-connection to ensure correct service transmission on NE1. If a wavelength-tunable OTU or line board is used in the colorless scenario, service wavelengths can be flexibly converted during rerouting to avoid wavelength blocking.

 A WSM9+WSD9 combination can be used for colorless scenarios in a non-coherent system.
 In the colorless scenario in a coherent system, when the WSMD4, WSMD9, RDU9, or TD20 board is used to drop services, a DEMUX board is not required. The coherent OTU board uses coherent receiver technology and can therefore correctly select its local drop wavelength from the multiplexed signal. The figure uses the TM20+TD20 combination to illustrate the colorless scenario.
 Take a 2-degree flexible ROADM as an example. Flexible-grid wavelengths are received, and their bandwidth is not fixed at 50 or 100 GHz but is configurable. A flexible ROADM allocates different bandwidths to different signals and grooms the signals to the specified direction based on network configurations.

[Figure: flexible ROADM application on a network of NE1 to NE4. NE1 is a 2-degree node (west and east OA and ROADM units, AM1/DM1 to AMx/DMx add/drop ports, IN/OUT line ports). The diagram shows the west and east add/drop signal flows, local wavelength add/drop to/from any direction, and flexible-grid wavelengths of different widths (37.5 GHz, 50 GHz, 75 GHz) carrying 100G, 200G, and 400G signals.]

 The following boards support flexible grid wavelength signals: TN13TM20, TN15TM20, TN97TM20, TN16WSD9, TN96WSD9, TN16WSM9, TN96WSM9, TN12WSMD4, TN13WSMD4, TN12WSMD9, TN15WSMD9, and DWSS20.
 Reference answer:

 Reconfigurable optical add/drop multiplexer (ROADM) boards add/drop any single or multi-wavelength signals to/from a multiplexed signal, and route the signals to any port in any order, achieving flexible wavelength grooming in multiple directions. The main boards are RDU9, RMU9, ROAM, TD20, TM20, WSD9, WSM9, WSMD2/4/9, and DWSS20.
 ClientLP-n: the logical client-side port of a board in compatible mode, for example,
201(ClientLP1/ClientLP1)-1, where 201(ClientLP1/ClientLP1) indicates the port
number and -1 indicates the channel number.

 ODUkLP-n: the logical ODUk port of a board in compatible mode, for example,
161(ODU0LP1/ODU0LP1)-2, where 161(ODU0LP1/ODU0LP1) indicates the port
number and -2 indicates the channel number.

 RX/TX-n: the logical client-side port of a board in standard mode, for example,
RX2/TX2-2, where RX2/TX2 indicates the port number and -2 indicates the channel
number.

 n(INn/OUTn)-OCH:1-ODUk:m-ODUp:q: the ODUk-level logical port of a board in standard mode, from which you can learn the service mapping path. The service mapping paths differ between the two ODU timeslot configuration modes: Assign consecutive and Assign random.

 In the Assign consecutive mode, level-by-level service mapping is performed from lower rates to higher rates, for example, ODU0->ODU1->ODU2. In this example, the logical port is represented as 1(IN1/OUT1)-OCH:1-ODU2:1-ODU1:2-ODU0:1, which means the first ODU0 in the second ODU1 of the first ODU2 on optical port 1.

 In the Assign random mode, cross-level service mapping is performed from a low rate to a high rate, for example, ODU0->ODU2. In this example, the logical port is represented as 1(IN1/OUT1)-OCH:1-ODU2:1-ODU0:1, which means the first ODU0 in the first ODU2 on optical port 1.
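The standard-mode port notation above can be unpacked mechanically. The following parser is a hypothetical illustration of the naming scheme, not a product API:

```python
def parse_odu_port(name):
    """Split e.g. '1(IN1/OUT1)-OCH:1-ODU2:1-ODU1:2-ODU0:1' into the optical
    port number and the ordered mapping path [(level, index), ...]."""
    head, _, rest = name.partition(")-")
    optical_port = int(head.split("(")[0])   # leading optical port number
    path = []
    for item in rest.split("-"):             # 'OCH:1', 'ODU2:1', ...
        level, _, index = item.partition(":")
        path.append((level, int(index)))
    return optical_port, path

# Assign consecutive example from the text: ODU0 -> ODU1 -> ODU2
port, path = parse_odu_port("1(IN1/OUT1)-OCH:1-ODU2:1-ODU1:2-ODU0:1")
```

The innermost entry of the path is the service-level ODU; here it is the first ODU0 in the second ODU1 of the first ODU2 on optical port 1.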
 For the same ODUk service granularity, tributary boards (standard/compatible) can interconnect with line boards (standard/compatible).

 For the same ODUk service granularity and line rate, line boards (standard/compatible) can interconnect with each other.
 Note: On the U2000, the subrack layout diagram displays different names of the
board in different modes (standard and compatible). For example, the name of the
TN52ND2 board in standard mode is displayed as TN52ND2(STND), and the name
of the TN52ND2 board in compatible mode is displayed as TN52ND2.
 Working mode of the port on the TOA board:

 ODU0 non-convergence mode(Any->ODU0)

 ODU1 non-convergence mode(Any->ODU1)

 ODU1 convergence mode(n*Any->ODU1)

 ODU1_ODU0 mode(OTU1->ODU1->ODU0)

 ODUflex non-convergence mode(Any->ODUflex)

 TOA converts signals as follows:

 8 x (125 Mbit/s to 1.25 Gbit/s signals) <-> 8 x ODU0

 8 x (1.49 Gbit/s to 2.67 Gbit/s signals) <-> 8 x ODU1

 8 x (125 Mbit/s to 2.5 Gbit/s signals) <-> (1 to 8) x ODU1

 8 x OTU1 <-> 16 x ODU0

 5 x 3G-SDI/3G-SDIRBR <-> 5 x ODUflex

 4xFC400/FICON4G<->4xODUflex
 TDX converts signals as follows:

 2x10GE LAN<->2xODU2e

 TN52TDX: 2x10GE LAN/10GE WAN/STM-64/OC-192/OTU2<->2xODU2

 2x10GE LAN/OTU2e<->2xODU2e

 TN53TDX/TN57TDX: 2x10GE LAN/10GE WAN/STM-64/OC-192/OTU2/FC800<->2xODU2

 2x10GE LAN/OTU2e/FC1200<->2xODU2e

 2xFC800<->2xODUflex
 The TEM28 board supports electrical-layer cross-connection, Layer 2 switching, and QinQ.
 ND2 board converts signals as follows:

 TN52ND2T01/TN52ND2T02/TN52ND201M01:

 16 x ODU0/8 x ODU1/2 x ODU2<->2 x OTU2

 2 x ODU2e<->2 x OTU2e

 TN52ND2T04/TN53ND2/TN57ND2:

 16 x ODU0/8 x ODU1/2 x ODU2/4 x ODUflex<->2 x OTU2

 2 x ODU2e<->2 x OTU2e

 Supports hybrid transmission of ODU0, ODU1, ODUflex, and ODU2/ODU2e signals.
 NS4 converts signals as follows:

 80xODU0/80xODUflex/40xODU1/10xODU2/10xODU2e/2xODU3/1xODU4<-
>1xOTU4

 Supports mixed transmission of ODU0, ODU1, ODUflex, ODU2, ODU2e, and ODU3 signals.
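The NS4 multiplexing ratios above follow from tributary-slot accounting: an OTU4 line offers 80 tributary slots of roughly 1.25 Gbit/s. A rough sanity check (slot counts per ITU-T G.709; ODUflex is omitted because its slot count varies with rate):

```python
# Number of 1.25G tributary slots occupied by each ODUk in an ODU4.
TRIB_SLOTS_ODU4 = 80
SLOTS_PER = {"ODU0": 1, "ODU1": 2, "ODU2": 8, "ODU2e": 8, "ODU3": 31}

def fits_in_otu4(clients):
    """clients: {odu_level: count}. True if the mix fits one OTU4 line."""
    used = sum(SLOTS_PER[level] * count for level, count in clients.items())
    return used <= TRIB_SLOTS_ODU4

# 80 x ODU0, 40 x ODU1, and 10 x ODU2 each fill the line exactly;
# mixes such as 2 x ODU3 + 16 x ODU0 also fit (62 + 16 = 78 slots).
```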
 TN54NS4/TN57NS4/TN58NS4:

 In the enhanced OptiX OSN 8800 T64/T32 subrack and OptiX OSN 8800 T16
subrack, the board can work either in line mode or relay mode. When the
board works in line mode, the enhanced OptiX OSN 8800 T64 subrack must
use the TNK2USXH+TNK2UXCT boards and the enhanced OptiX OSN 8800
T32 subrack must use the TN52UXCH/TN52UXCM board and the OptiX OSN
8800 T16 subrack must use the TN16UXCM board.

 In the general OptiX OSN 8800 T64 subrack, OptiX OSN 8800 UPS and
general OptiX OSN 8800 T32 subrack, the board can work only in relay mode.

 TN56NS4:

 In the enhanced OptiX OSN 8800 T64/T32 subrack, general OptiX OSN 8800
T32 and OptiX OSN 8800 T16 subrack, the board can work either in line
mode or relay mode. When the board works in line mode, the enhanced
OptiX OSN 8800 T64 subrack must use the TNK2USXH+TNK2UXCT boards
and the enhanced OptiX OSN 8800 T32 subrack and general OptiX OSN 8800
T32 must use the TN52UXCH/TN52UXCM board and the OptiX OSN 8800
T16 subrack must use the TN16UXCM board.

 In the general OptiX OSN 8800 T64 subrack and OptiX OSN 8800 UPS, the
board can work only in relay mode.
 The TN52ND2/TN53ND2 board for the OptiX OSN 8800 universal platform
subrack only supports relay mode.
 Any-level service cross-connections can be configured only as inter-board cross-connections in paired slots; they do not require the XCS board.
 For GE/Any services passing through a node, perform the following configurations:

 First, add a cross-connection from the service source OP port to the LP port. For an OTU board whose OP and LP ports are fixedly connected (an OTU board with integrated tributary-line function), skip this step.

 Then, add a cross-connection from that LP port to the LP port of the other OTU board.

 Finally, add a cross-connection from the LP port of the other OTU board to the service sink OP port. Similarly, for an OTU board whose LP and OP ports are fixedly connected (a tributary-line wavelength conversion board), this step can be omitted.

 Similarly, for GE services, the equipment supports both centralized cross-connections and cross-connections in paired slots. For Any-level services, however, the equipment supports cross-connections only in paired slots.
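The three-step pass-through configuration above can be sketched as data. Everything here (board names, the fixed_op_lp flag) is invented for illustration and does not correspond to a real NMS interface:

```python
# Sketch: build the ordered cross-connection list for passing a GE/Any
# service through a node via two OTU boards. Steps 1 and 3 are skipped
# when a board's OP and LP ports are fixedly connected internally.

def passthrough_xcs(src_otu, dst_otu):
    xcs = []
    if not src_otu["fixed_op_lp"]:      # step 1: source OP -> LP
        xcs.append((src_otu["name"] + ":OP", src_otu["name"] + ":LP"))
    # step 2: LP -> LP between the two OTU boards (always required)
    xcs.append((src_otu["name"] + ":LP", dst_otu["name"] + ":LP"))
    if not dst_otu["fixed_op_lp"]:      # step 3: LP -> sink OP
        xcs.append((dst_otu["name"] + ":LP", dst_otu["name"] + ":OP"))
    return xcs

demo = passthrough_xcs({"name": "OTU-W", "fixed_op_lp": False},
                       {"name": "OTU-E", "fixed_op_lp": True})
```

In the demo, the sink board has fixed OP/LP ports, so only two cross-connections are configured.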
 NG WDM equipment supports centralized and distributed cross-connections and tributary-line separation. Therefore, you can configure optical channels and tributary ports on demand to flexibly construct networks. By combining different NG WDM equipment, a large-granularity service bearer network covering the backbone core network down to the access layer can be provided.

 Signal flows of any granularity can be converged into ODUk channels, and multiple
services of multiple sites can be mixed in the same ODUk, implementing flexible
service grooming and high bandwidth utilization.
 With the OTN+FOADM/ROADM feature, any client-side service can be groomed to
any direction to improve bandwidth utilization and effectively transmit client-side
services, as shown in the above figure.

 Client-side services at any rate are accessed through tributary boards. After
OTN encapsulation, ODUk granularities are flexibly scheduled at the electrical
layer, and the bandwidth is shared. Then, different wavelengths are output
through the line board.

 Through the physical fiber patch cord of the FOADM board or the optical-
layer cross-connection of the ROADM board, signals of different wavelengths
can be transmitted in different directions.

 If signals in different directions do not need to be added or dropped locally, they can be passed through directly to other directions via the FOADM board's fiber connections or the ROADM board's optical cross-connections.
 Reference answer:

 Inter-board electrical cross-connections are configured between boards to groom ODUk and GE services inside a subrack. They are configured on the U2000.

 Intra-board cross-connections refer to the cross-connections configured between different ports on the same board.
 Precautions

 If the subrack is connected in master/slave mode, the NMS computer must be connected to the master subrack.

 The IP address of the gateway NE must be in the same address segment as the IP address of the computer.

 Step 1: Check the network cable. One end should be connected to the network
port of the NMS computer. The other end should be connected to the
specified port on the board.

 For the OptiX OSN 8800 T64/T32, connect the other end to the NM_ETH1
port on the EFI2 or the NM_ETH2 port on the EFI1.

 Step 2: Turn on the power switch of the computer and check whether the
indicator of the network adapter port on the computer is steady on.

 Step 3: Check the indicators on the board. The green LINK indicator should be
steady on, and the orange ACT indicator should blink.

 For the OptiX OSN 8800 T64/T32, observe the indicators on the
NM_ETH1 interface of the EFI2 or the NM_ETH2 interface of the EFI1.
 Precautions

 If the subrack is connected in master/slave mode, the NMS computer must be connected to the master subrack. The IP address of the NE must be in the same network segment as the IP address of the LAN.

 The network cable connects the NE to the hub, router, or Ethernet switch.

 Step 1: Connect the NMS computer to the LAN.

 Step 2: Check the network cable. The NMS computer is connected to the LAN
through a network cable. The equipment is connected to the LAN through the
interface of the corresponding board.

 For the OptiX OSN 8800 T64/T32, the other end is connected to the
NM_ETH1 port on the EFI2 or the NM_ETH2 port on the EFI1.

 Step 3: Turn on the power switch of the computer and check whether the green
indicator of the network adapter port on the computer is steady on.

 Step 4: Check the indicators on the board. The green LINK indicator should be
steady on, and the orange ACT indicator should blink.

 For the OptiX OSN 8800 T64/T32, observe the indicators on the NM_ETH1
interface of the EFI2 or the NM_ETH2 interface of the EFI1.

 Note: The following procedures are the same as Method 1: Directly connecting to
the NMS computer.
 The user name and password are required for starting the System Monitor and
Client of the U2000.

 On the live network, you are advised to change the default user name and
password as soon as possible according to security management requirements.
 When multiple subracks are required to form an NE, the master/slave subrack
mode is required for unified management. In master/slave subrack mode, multiple
subracks are displayed as one NE on the U2000 and Web LCT, and the slave
subrack does not need to be assigned an independent NE ID and IP address.
 You can manage an NE through the NMS only after the NE is created. Creating a single NE is not as convenient or accurate as creating NEs in batches; however, a single NE can be created regardless of whether data has been configured on the NE side and regardless of the NE's communication mode. NEs that use serial port communication do not support the equipment search function and must be created one by one. On the U2000, NG WDM equipment does not support pre-configuration.
 Step 1: Right-click in the Main Topology and choose New > NE (E) from the
shortcut menu. The Create NE dialog box is displayed.
 Step 2: In the Object Type navigation tree, select the equipment type of the NE to
be created.
 Step 3: Enter the ID, extended ID, name, and remarks of the NE.
 Step 4: To create a gateway NE, go to Step 5. To create a non-gateway NE, go to Step 6.
 Step 5: Select Gateway from the Gateway Type drop-down list and set IP Address.
 Step 6: Set Gateway Type to Non-Gateway and select the gateway to which the NE
belongs.
 Step 7: In the Optical NE area, select the optical NE to which the WDM NE belongs.
When creating an OptiX OSN 1800/8800/9800 NE, you do not need to select the
optical NE to which the NE belongs. In this case, the NE is directly created in the
Main Topology.
 Step 8: Enter the NE user name and password. The default NE user name is root; the default password is password or Changeme_123.
 Step 9: Click OK. The cursor is displayed as +. In the Main Topology, select the NE
location and click OK. The NE is successfully created.
 The U2000 can directly search for NEs that are connected through the TCP/IP or
ECC and create them in batches. This method of creating NEs is faster and more
reliable than that of creating NEs manually. Therefore, you are advised to create
NEs in batches.

 Step 1: Choose File > Search > NE from the main menu. The NE Search window is
displayed.

 Step 2: Click the Transport NE Search tab.

 Step 3: In the Search Type area, select the type to be searched for.

 Set Search Type to Search NE.

 In the Search Domain dialog box, click Add. The Search Domain Input
dialog box is displayed.

 Set Address Type to Gateway NE IP Network Segment, Gateway NE IP
Address, or NSAP Address; enter Search Address, User Name, and
Password; and click OK.

 In the Search NEs area, select Create NE after search, and enter NE User
and NE Password. The default NE user is root, and the default password is
password or Changeme_123. If you select Upload immediately after creating
an NE, the NE data is uploaded to the NMS after the NE is created.

 Set Search Type to IP Auto Discovery. If you cannot enter the correct network
segment, start automatic IP address discovery, which obtains the IP
addresses of the gateway NE and of all NEs under that gateway NE.
 The U2000 defines four types of optical NEs: WDM_OTM, WDM_OLA,
WDM_OADM, and WDM_OEQ. OEQ is an OLA NE that implements optical power
equalization and dispersion equalization. If a board with dispersion compensation
and power compensation exists on an OLA NE, the NE should be changed to an
OEQ NE.

 In NG WDM products, the first three types of optical NEs are usually used.
 Optical NEs are different WDM sites.

 Step 1: Right-click in the Main Topology and choose New > NE from the shortcut
menu.

 Step 2: In the Create NE dialog box, expand Optical NE in the left pane and
select the type of the optical NE to be created.

 Step 3: Click Basic Attributes, and enter the basic attributes such as the optical NE
name according to the customer plan.

 Step 4: Click Resource Division, select an NE or board from the idle resources,
and click the double arrow on the right to move it to the optical NE. To re-
allocate resources for a created optical NE, right-click the optical NE and
choose Properties from the shortcut menu, click the Resource Division tab,
select the NE or board to be added in the left pane, and click the double arrow
to add it to the optical NE.

 Step 5: Click OK.

 Step 6: Click a location in the Main Topology to place the optical NE icon.


 The NE is still in the unconfigured state after being created. The NMS can manage
and operate the NE only after the NE data is configured.

 Initialize and Manually Configure NE Data

 By manually configuring NE data, you configure the NE board and slot
information.

 Copy NE Data

 By copying NE data, you can copy the NE data of the same NE type and NE
version to the new NE.

 Upload

 Upload is the most common way to configure NE data. Uploading transfers the
current NE data, such as configuration data, alarm data, and performance data,
directly to the U2000. You are advised to use the upload mode to configure NE
data.
 Step 1: In the Main Topology, double-click an optical NE that contains an
unconfigured NE, and then double-click the unconfigured NE in the left pane.
The NE Configuration Wizard dialog box is displayed.

 Step 2: Select Initialize and Manually Configure NE Data and click Next. The
Confirm dialog box is displayed, indicating that manual configuration will clear NE
data.

 Step 3: Click OK. The Confirm dialog box is displayed, indicating that manual
configuration interrupts NE services.

 Step 4: Click OK. The Set NE Attributes dialog box is displayed.

 Step 5: Optional: To modify NE attributes, set NE Name, Type, NE Remarks,
Subrack Type, Service Type, and Service Capacity.

 Step 6: Click Next. The NE slot page is displayed.

 Step 7: Optional: Click Query Logic Information to query the logical board of the
NE. That is, query the original board configuration information recorded on the
SCC board.

 Step 8: Optional: Click Query Physical Information to query the physical board of
the NE. That is, query the information about the current hardware online board.
Logical information and physical information cannot be queried for preconfigured
NEs.

 Step 9: Optional: Right-click an NE slot and add a board as required.

 Step 10: Click Next. The Deliver Configuration page is displayed.


 The NE type and NE version of the source NE must be the same as those of the
copied NE.

 Step 1: In the Main Topology, double-click an optical NE that contains an
unconfigured NE, and then double-click the unconfigured NE in the left pane.
The NE Configuration Wizard dialog box is displayed.

 Step 2: Select Copy NE Data and click Next. The NE Copy dialog box is displayed.

 Step 3: Select the source NE from the drop-down list and click Start. The Confirm
dialog box is displayed, indicating that all data of the source NE will be copied.
Copying NE data changes only the data on the U2000 but does not change the
data on the NE.

 Step 4: Click OK. The Confirm dialog box is displayed, indicating that the original
data of the NE will be lost.

 Step 5: Click OK. The replication process is displayed. Wait for several seconds. The
Operation Result dialog box is displayed.

 Step 6: Click Close.


 Procedure:

 Step 1: In the Main Topology, double-click an optical NE that contains an
unconfigured NE, and then double-click the unconfigured NE in the left pane.
The NE Configuration Wizard dialog box is displayed.

 Step 2: Select Upload and click Next. The Confirm dialog box is displayed.

 Step 3: Click OK. The uploading process is displayed. After the upload is
complete, the Operation succeeded dialog box is displayed.

 Step 4: Click Close.


 Procedure:

 Step 1: Choose Configuration > NE Configuration Data Management from
the main menu.

 Step 2: Select the created NE from the topology tree on the left, click the
double arrow on the right, and select the NE whose NE status is
Unconfigured from the Configuration Data Management list.

 Step 3: Click Upload. The Confirm dialog box is displayed. Click OK. The
uploading process is displayed.

 Step 4: After the upload is complete, the Operation Result dialog box is
displayed. Click Close.
 By adding a slave subrack, multiple subracks can communicate with the U2000 or
Web LCT through the same master subrack. In this way, the service grooming
capability of the equipment is enhanced. After creating an NE manually, you must
add a logical slave subrack.
 Pay attention to the following items for OTU boards:

 Laser status: when the board is in use or being commissioned, the laser
status should be Enabled.

 ALS status: during commissioning, the automatic laser shutdown (ALS)
function must be forcibly disabled.

 Set interface parameters for WDM boards based on project requirements.
Different boards support different parameters, but the configuration procedure
is the same. You can also query all interface parameters.

 Step 1: In the NE Explorer, select a board and choose Configuration > WDM
Interface from the Function Tree.

 Step 2: Select By Board/Port (Channel), and then select Channel from the drop-
down list. When you select By Function, you can query and set board and channel
parameters from the function perspective.

 Step 3: In the Basic Attributes and Advanced Attributes tabs, double-click the
parameter field to modify or set the attributes of each optical port or board.

 Step 4: Click Apply.

 Step 5: Click Query. In the Operation Result dialog box that is displayed, click Close.
Verify that the queried board attributes match the configured values.
 Note:
 For the FEC, ensure that the FEC working status and FEC type of the transmit-
end board are the same as those of the receive-end board.
 For the wavelength of the OTU board, the U2000 provides two parameters:
Configure Wavelength and Actual Wavelength. The actual wavelength is the
wavelength currently used by the board; the configure wavelength is used to
set the wavelength of a tunable OTU board. For a board that supports
tunable wavelengths, set the configure wavelength to change the actual
wavelength of the OTU board. For a board with a fixed wavelength, manually
set the configure wavelength to the same value as the actual wavelength.
 Set the receive wavelength of the board. If the actual receive wavelength of the IN
port on the board is different from the configured receive wavelength, services are
unavailable. In this case, change the receive wavelength of the board to be the
same as the actually received wavelength.
 Step 1: In the NE Explorer, select the corresponding board and choose
Configuration > WDM Interface from the Function Tree.
 Step 2: Select By Board/Port (Channel), and then select Channel from the
drop-down list.
 Step 3: In the Advanced Attributes tab, double-click the Receive Band Type
field and select C.
 Step 4: Double-click the Receive Wavelength field to set the required
wavelength.
 Step 5: Click Apply.
 Before creating a fiber, you are advised to set Wavelength No. / Wavelength (nm) /
Frequency (THz) of the port of the tunable OTU to the planned wavelength.
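The Wavelength (nm) and Frequency (THz) fields describe the same channel, related by λ·ν = c. A small illustrative sketch (not part of the U2000) that converts between the two and computes an ITU grid channel frequency from the 193.1 THz anchor:

```python
# Speed of light expressed in nm*THz, so wavelength_nm * freq_thz = C.
C = 299_792.458

def freq_to_wavelength(freq_thz: float) -> float:
    """Wavelength in nm for a given optical frequency in THz."""
    return C / freq_thz

def wavelength_to_freq(wavelength_nm: float) -> float:
    """Optical frequency in THz for a given wavelength in nm."""
    return C / wavelength_nm

def itu_channel_freq(n: int, spacing_ghz: float = 100.0) -> float:
    """Frequency of ITU grid channel n offsets from the 193.1 THz anchor."""
    return 193.1 + n * spacing_ghz / 1000.0
```

For example, 193.1 THz corresponds to roughly 1552.52 nm in the C band, which is why the NMS can present either field for the same tunable port.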

 Fiber Creation Mode

 Create fibers in synchronous mode.

 After equipment commissioning is complete, fiber connection
information may already exist on the NE. You can synchronize this fiber
connection information from the NE to the U2000.

 Manually Creating Fibers in Graphic Mode

 Creating fiber connections in graphic mode is performed in the main
view or signal flow diagram and is more intuitive. This mode is
applicable when a small number of fibers need to be created one by
one.

 Manually Creating Fibers in List Mode

 In the Fiber/Cable Management window, you can manage all NEs and
fibers inside NEs in a unified manner. Compared with graphic mode,
list mode is less intuitive, but it supports batch creation and is
applicable when a large number of fibers need to be created.
 Procedure (Single NE):
 Step 1: In the NE Explorer, select an NE and choose Configuration > Fiber
Synchronization from the Function Tree.
 Step 2: Click Synchronize. The internal fiber connections on the U2000 and
NEs are displayed.
 Consistent fiber: a fiber that exists on both the U2000 and the NE
with the same information.
 Fiber not created on the U2000: a fiber that exists only on the NE side.
 Fiber not created on the NE: a fiber that exists only on the NMS side.
 Step 3: For different situations, you can perform the following operations:
 If there are fibers that are not created on the NMS or the fibers that are
not created on the NE, select all the fibers and click Create Fiber/Cable.
In the dialog box that is displayed, click Close. The synchronized fibers
are displayed in the Consistent Fiber list.
 If there are conflicting fibers, they cannot be created directly. A
conflicting fiber is one that is configured on the NE side but is
inconsistent with the fiber configured on the NMS side; after you click
Synchronize, such fibers appear in neither the "not created on the
U2000" list nor the "not created on the NE" list and cannot be
synchronized between the U2000 and the NE. Click Delete Fiber/Cable to
delete the incorrect fibers according to the networking design, and
then click Create Fiber/Cable to re-create the remaining fibers.
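The lists shown in the Fiber Synchronization window follow from a simple set comparison between the fibers recorded on the U2000 and on the NE. A minimal sketch of that classification; the data structures here are invented for illustration, not U2000 code:

```python
def classify_fibers(nms_fibers: dict, ne_fibers: dict):
    """Split fibers into the four lists shown after clicking Synchronize.

    nms_fibers / ne_fibers map (source_port, sink_port) -> attribute value.
    """
    consistent, only_on_ne, only_on_nms, conflicting = [], [], [], []
    for key in set(nms_fibers) | set(ne_fibers):
        if key not in nms_fibers:
            only_on_ne.append(key)       # no fiber created on the U2000
        elif key not in ne_fibers:
            only_on_nms.append(key)      # no fiber created on the NE
        elif nms_fibers[key] == ne_fibers[key]:
            consistent.append(key)       # same fiber on both sides
        else:
            conflicting.append(key)      # must be deleted, then re-created
    return consistent, only_on_ne, only_on_nms, conflicting
```

In this model, a conflicting fiber is one whose endpoints exist on both sides but whose recorded attributes differ, matching the behavior described above.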
 Create an internal fiber connection for the NE.

 Step 1: In the Main Topology, double-click the optical NE icon and click the
Signal Flow Diagram tab.

 Step 2: Right-click in the blank area of the Signal Flow Diagram and choose
New Fiber from the shortcut menu. The cursor is displayed as a plus sign (+).

 Step 3: Select the source board and source port, and click OK. The cursor is
displayed as a plus sign “+”.

 Step 4: Select the sink board and sink port, and click OK. If the selected
sink/source board or port is incorrect, right-click it and exit the object
selection from the shortcut menu.

 Step 5: In the Create Fiber/Cable dialog box, set the attributes of the fiber.

 Step 6: Click OK to complete the creation. To delete a fiber, right-click the
fiber and choose Delete from the shortcut menu.

 Create fiber connections between NEs. The creation of fiber connections between
NEs is completed in the Main Topology. The purpose is to complete the FIU fiber
connections between sites.

 Step 1: In the Main Topology, select the shortcut icon. The cursor is displayed
as "+".

 Step 2: In the Main Topology, click the source NE of the fiber/cable.

 Step 3: In the Select Fiber Source dialog box, select the source board and
source port.
 Step 1: Choose Inventory > Fiber/Cable/Microwave Link > Fiber/Cable/Microwave
Link Management from the main menu.

 Step 2: Click Create. The Create Fiber/Cable dialog box is displayed.

 Step 3: Click Object Selection. In the displayed Select NE Object dialog box, select
all the NEs whose fibers need to be created.

 Step 4: Click OK.

 Step 5: In the Batch Create Fiber/Cable dialog box, click Create.

 Step 6: Set Direction, Source NE, Source Port, Sink NE, and Sink Port.

 Step 7: Click Apply. In step 6, you can create multiple fibers/cables and set related
parameters. Then, click Apply.

 Step 8: The Operation Result dialog box is displayed, indicating that the operation
is successful.

 Step 9: Optional: Repeat steps 6 to 8 to create the next fiber connection.

 Step 10: After creating all fibers, click Cancel to exit the Batch Create Fiber/Cable
dialog box. All created fibers are displayed in the Fiber/Cable Information list.

 Step 11: When you place the cursor on the created fiber, the information about
the fiber is displayed. Check whether the fiber is correctly created.

 Note: A two-fiber bidirectional fiber can be used to implement bidirectional
transmission between transport equipment. When a single-fiber unidirectional
fiber is created, you also need to create a fiber in the reverse direction.
 For an NE that is not configured with the NTP service, to ensure that the NMS can
accurately record the alarm generation time, you need to periodically check
whether the time on the NE is consistent with that on the NMS during routine
maintenance. If they are inconsistent, manually synchronize the time between the
NE and the NMS.
 Synchronizing NE time does not affect the normal running of services. Before
synchronizing NE time, ensure that the system time of the U2000 server computer
is correct. To change the system time of the server computer, log out of the U2000,
reset the system time of the computer, and then restart the U2000.
 Step 1: In the NE Explorer, select an NE and choose Configuration > NE Time
Synchronization from the Function Tree. In the Operation Result dialog box, click
Close.
 Step 2: Right-click an NE and choose Synchronize with NMS Time from the
shortcut menu. In the displayed dialog box, click Yes.
 Step 3: In the Operation Result dialog box that is displayed, click Close.
 Batch operation
 Step 1 Choose Configuration > NE Batch Configuration > NE Time Synchronization
from the main menu.
 Step 2 Select an NE from the Object Tree on the left and click the double arrow on
the right.
 Step 3 In the Operation Result dialog box, click Close.
 Step 4 Set Synchronization Mode to NMS and click Apply.
 Step 5 Set the synchronization start time and automatic synchronization period
(day), and click Apply.
 Note: The synchronization start time cannot be earlier than the current time.
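The routine check described above amounts to a drift test: compare the NE clock with the NMS clock and synchronize only when they diverge. An illustrative sketch, not a U2000 API; the 5-second threshold is an arbitrary example value:

```python
from datetime import datetime, timedelta

def needs_time_sync(ne_time: datetime, nms_time: datetime,
                    max_drift: timedelta = timedelta(seconds=5)) -> bool:
    """True if the NE and NMS clocks differ by more than the allowed drift,
    in which case a manual 'Synchronize with NMS Time' is warranted."""
    return abs(ne_time - nms_time) > max_drift

def valid_sync_start(start: datetime, now: datetime) -> bool:
    """Mirror the note above: the synchronization start time cannot be
    earlier than the current time."""
    return start >= now
```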
 You can set the performance monitoring parameters of an NE and start the
performance monitoring of the NE to obtain the detailed performance records of
the NE during the running of the NE. In this way, the maintenance personnel can
monitor and analyze the running status of the NE.

 Step 1: Choose Performance > NE Performance Monitoring Time from the main
menu.

 Step 2: Select a subnet or NE from the navigation tree in the left pane, and then
click the double arrow on the right.

 Step 3: Select the NE whose performance monitoring function is to be enabled.

 Step 4: Select the 15 minutes or 24 hours check box as required. In the Set 15-
Minute Monitoring or Set 24-Hour Monitoring dialog box, click Open.

 Step 5: Click the ... button, and set the monitoring start time and end time as required.

 The start time must be later than the current time of the U2000 and NEs. To
start monitoring immediately, set Start Time to a time slightly later than
the current time of the U2000 and NEs.

 To set the end time, select the End time check box, and the end time must be
later than the start time. If you do not select the End Time check box, the
monitoring function is always enabled.

 Step 6: Click Apply. The Set Monitoring Time dialog box is displayed, showing the
progress. The progress is automatically closed after the progress is 100%.

 Step 7: In the Operation Result dialog box, click Close.
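The time rules in step 5 can be expressed as a small validation function. This is an illustrative sketch, not U2000 code:

```python
from datetime import datetime
from typing import Optional

def validate_monitoring_window(start: datetime, end: Optional[datetime],
                               now: datetime) -> bool:
    """Mirror the U2000 rules: the start time must be later than the current
    NMS/NE time; the end time, if set, must be later than the start time.
    No end time means monitoring stays enabled indefinitely."""
    if start <= now:
        return False
    if end is not None and end <= start:
        return False
    return True
```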


 During routine maintenance, back up the NE database to ensure that the NE data
is automatically restored after the equipment is powered off. Backing up the
NE database to the SCC board copies the NE data from the DRDB database on the
SCC board to the flash database. After the NE is powered off and restarted,
the SCC board automatically reads the configuration from the flash memory and
delivers it to the boards.

 By default, the U2000 automatically backs up the NE database to the flash memory
every 30 minutes.

 Step 1: Choose Configuration > NE Configuration Data Management from the
main menu.

 Step 2: Select an NE from the function tree on the left and click the double arrow
on the right.

 Step 3: In the Configuration Data Management List area, select one or more NEs
whose databases need to be backed up. Click Back Up NE Data and select Back Up
Database to SCC. In the confirmation dialog box, click OK.

 Step 4: In the Operation Result dialog box, click Close.


 During routine maintenance, you need to back up the NE database. When the SCC
board is configured with a CF card, you can manually back up the NE data in the
DRDB database on the SCC board to the CF card. This ensures that the NE
automatically recovers when the data in the DRDB database on the SCC board is
lost or after a power failure happens.

 Step 1: Choose Configuration > NE Configuration Data Management from the
main menu.

 Step 2: Select an NE from the Object Tree on the left and click the double arrow on
the right.

 Step 3: Select one or more NEs from the Configuration Data Management List.

 Step 4: Click Back Up NE Data and select Manually Back Up Database to CF Card.

 Step 5: In the confirmation dialog box, click OK.

 Step 6: In the Operation Result dialog box, click Close. The backup operation may
take several minutes. Please wait.

 Note:

 Restoring the NE database obtains data from the CF card.

 The CF card is installed on the SCC board and can be removed and inserted.
 The U2000 supports two modes: Local FTP server and third-party FTP server. If the
number of managed NEs exceeds 5000, you are advised to use a third-party FTP
server to transfer files between NEs and the NMS server.

 Local FTP service mode: the U2000 server functions as the FTP server. During
U2000 installation, an FTP/SFTP user is created by default. This user is used
to transfer files between the client, NEs, and the U2000 server.

 Third-party FTP service mode: That is, configure another computer as the FTP
server. In this mode, you need to install the FTP/SFTP service on the computer and
create the FTP/SFTP user. Then, you can transfer files between the client, NE, and
third-party FTP server.

 The FTP service used by the U2000 can transfer files in two modes: FTP and SFTP.
SFTP is more secure than FTP. To ensure system security, SFTP is recommended.

 After the U2000 is installed, the FTP service application has a default FTP account,
and the default FTP account is bound to a default FTP user. If an application wants
to use an independent FTP account to use the FTP service, you need to create an
FTP user, create an FTP account for the FTP user, and add the new FTP account to
the application. You are advised to use the default FTP account without adding an
FTP user.
 Prerequisites

 Ensure that the FTP service is enabled on the FTP server.

 Ensure that the FTP user used by the application has been created on the FTP
server and the FTP user information has been configured.

 Operation Procedure

 Choose System > Settings > FTP Account Information Management
(traditional style), or double-click System Management in Application Center
and choose Settings > FTP Account Information Management (application
style).

 In the FTP Account Information Management dialog box, click the FTP
Account Configuration tab.

 On the FTP Account Configuration tab page, right-click the FTP account to be
tested and choose Test from the shortcut menu.

 In the displayed dialog box, click Test FTP or Test SFTP to test the availability
of the FTP or SFTP function of the FTP server.

 If the test is successful, click Close in the Operation Result dialog box.

 If the test fails, check whether the FTP user exists in the background, whether
the FTP user name and password are correct, whether the FTP service is
started, and whether the user directory permission is correct. Perform the
test again after the confirmation is complete.
 Procedure

 Step 1: Choose Administration > NE Software Management > NE Data
Backup/Restoration from the main menu.

 Step 2: Right-click in the NE View list. A shortcut menu is displayed. When
multiple devices are selected, the Backup Information tab is unavailable.

 Step 3: Click Backup. The Backup dialog box is displayed.

 Step 4: Select the NM Server or NM Client option button to back up the
selected equipment information.

 By default, the NM Server option button is selected.

 If you select the NM Server option button, the file will be backed up on
the NMS server.

 Step 5: Optional: If you select the NM Client option button, click the browse
button to select the path for backing up the equipment data.

 Step 6: Click Start. The NE View tab page displays the backup progress.

 Step 7: After the backup succeeds, the U2000 creates the following directory
structure in the specified path: NE-Name/NE-Name_yyyymmdd_hhmmss/dbf.pkg,
where NE-Name is the NE name, yyyymmdd is the year, month, and day, and
hhmmss is the backup time.
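The backup directory layout described in step 7 can be reproduced with a few lines. A hypothetical helper, not part of the U2000:

```python
from datetime import datetime

def backup_path(ne_name: str, when: datetime) -> str:
    """Build the backup path in the documented layout:
    NE-Name/NE-Name_yyyymmdd_hhmmss/dbf.pkg"""
    stamp = when.strftime("%Y%m%d_%H%M%S")
    return f"{ne_name}/{ne_name}_{stamp}/dbf.pkg"
```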
 Answer: Back up the NE database to the SCC board or CF card and back up NE
data to the NMS server or client.
 Optical grooming is the configuration of logical wavelength routes, realized by
optical cross-connections. This function meets the user's requirement of managing
services at the optical layer. The products provide flexible optical grooming:
when services change, users only need to adjust the configuration accordingly
on the U2000.
 Note: For a ROADM station, optical cross-connections must be created; for
FOADM and OLA stations, this is unnecessary.
 Default Edge Port: The optical interface on a board that can serve as the source or
sink port of an optical cross-connection by default.

 Available Edge Port: Before being set as an optical cross-connection source or sink
port, the Available Edge Port must be set to Selected Edge Port. After an Available
Edge Port is set as the source or sink port of an optical cross-connection, the port
is not displayed as an Available Edge Port that can be used by other optical cross-
connections.

 Selected Edge Port: a board interface that is selected from the available edge
ports and can serve as the source or sink port of an optical cross-connection.
When creating an optical cross-connection, select board optical interfaces as the
source and sink ports of the cross-connection based on the network design. In this
way, the route for the cross-connection is created and the optical signal grooming
between the source and sink ports is achieved.
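The three edge-port states above behave like a small state machine: an available port must first be selected, and a port used by a cross-connection is withdrawn from further use. A toy model with an invented class and port names, not product code:

```python
DEFAULT, AVAILABLE, SELECTED, IN_USE = "default", "available", "selected", "in_use"

class EdgePort:
    """Toy model of an optical cross-connection edge port."""

    def __init__(self, name: str, state: str = AVAILABLE):
        self.name, self.state = name, state

    def select(self) -> None:
        # Only an available edge port can be promoted to a selected edge port.
        if self.state != AVAILABLE:
            raise ValueError(f"{self.name}: only an available port can be selected")
        self.state = SELECTED

    def use_in_cross_connection(self) -> None:
        # Default and selected edge ports can serve as source/sink; once used,
        # the port is no longer offered to other optical cross-connections.
        if self.state not in (DEFAULT, SELECTED):
            raise ValueError(f"{self.name}: set the port as an edge port first")
        self.state = IN_USE
```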
 Prerequisite:

 Fiber connection must be correctly created for the WDM equipment.

 When creating a single-station optical cross-connection, make sure that no
board-level optical cross-connection in the station occupies the wavelength
that the single-station optical cross-connection will use.
 Before configuring wavelength grooming based on the configuration flow,
complete the basic configuration of NEs according to the configuration flow of
creating a network.

 If configuring the single-station cross-connection, you can create the logical fiber
connection between NEs and between boards that are inside the NEs on the
U2000. Or create the logical fiber connection between NEs on the U2000 and the
logical fiber connection between boards that are inside the NEs on the Web LCT.
 Before creating a single-station optical cross-connection, configure the
corresponding port on the board as an edge port.

 For ports on the FIU unit and the line side of the OTU unit, no edge-port
configuration is required; the system defaults them to fixed edge ports.
If a port already has an inter-NE fiber connection, it automatically
becomes an edge port of the NE.

 Such a port can no longer be used for fiber connections between boards
within the NE.

 Conversely, if a port already has a fiber connection to another board
within the NE, it can no longer be configured as an edge port of the NE.

 To change a selected edge port, select the corresponding port from Selected
Edge Ports, and then click the arrow button to move the port back to Available
Edge Ports.
 In the NE Explorer, select an NE and choose Configuration > Optical Cross-
Connection Management from the Function Tree. Click the Single-Station Optical
Cross-Connection tab.
 U2000 Operation procedure:

 In the NE Explorer, select the NE icon and choose Configuration > Optical
Cross-Connection Management from the Function Tree. In the right pane,
click the Single-Station Optical Cross-Connection tab.

 Click New. The Create Optical Cross-Connection Service dialog box is
displayed. Enter the source and sink ports of the optical cross-connection
service and select the corresponding wavelength number.

 By default, a forward optical cross-connection is created; to create the
reverse direction as well, select Create Reverse Optical Cross-Connection.

 Click Apply. A dialog box is displayed, indicating that the operation is
successful. Click Close. The created single-station optical cross-connection is
displayed in the window.

 When creating a single-station optical cross-connection, you can set the optical
power adjustment mode to automatic or manual. If Mode is set to Automatic, the
optical add/drop multiplexing unit can be used to automatically adjust the optical
power. If Manual is selected, you can only manually adjust the optical power.
 NE B is an OLA site and does not need to be configured with single-station optical
cross-connections. NE C is an OTM station and needs to be configured with single-
station optical cross-connections. The configuration process is the same as that of
NE A.
 To manage WDM trails, you need to search for cross-connections and fiber
connection data on the network to form end-to-end WDM trails at the network
layer of the U2000.
 If the pass-through station is OLA or FOADM, the single-station optical cross-
connection does not need to be configured. If the pass-through station is ROADM,
the single-station optical cross-connection needs to be configured.
 The intra-board optical wavelength route can be set for a board that performs
grooming at the optical layer. The intra-board service route is established through
the creation of single-board optical cross-connection.

 Procedure:

 Click the NE in the NE Explorer, and choose Configuration > Optical Cross-
Connection Management from the Function Tree. Click Board-Level Optical
Cross-Connection tab in the right-hand pane.

 Click Create. The Create Optical Cross-Connection window is displayed.

 Select the source slot, sink slot, source port, and sink port. Click the
button to the right of Source Wavelength or Sink Wavelength, select the
wavelengths from the Available Wavelengths list, and add them to
Selected Wavelengths.

 Click OK.

 Click OK. A dialog box is displayed, indicating that the operation is successful.
Click Close. The created board optical cross-connection is displayed in the
window.
 Similar to the single-station method, to manage WDM trails, you need to search
for cross-connections and fiber connection data on the network to form end-to-
end WDM trails at the network layer of the U2000.
 The U2000 provides the functions of creating, browsing, merging, separating, and
deleting E2E WDM trails. In addition, it provides the signal flow diagram of trails,
which intuitively shows the signal flow of trails and improves the operation and
maintenance efficiency.

 Currently, the U2000 provides OTN trail models in compliance with ITU-T G.872.
OTN trails include the following types:

 Client trails

 ODUk trails

 OTUk trails

 OCh trails

 OMS trails

 OTS trails

 OSC trails

 For a ROADM network, OCh trails can be created. If the optical cross-connections
of all sites that the optical-layer service traverses have been created, the OCh
trail can be searched out automatically.

 For FOADM networks, OCh trails can be automatically searched and generated.
 Procedure:

 On the main menu of the U2000, choose Service > WDM Trail > Create WDM
Trail.

 In the displayed window, set Level to OCh.

 Click Browse on the right side of the Source field. In the displayed dialog box,
select the required NE and specify a port on the NE as the source port of the
E2E trail. Then use the same procedure to specify the sink port.

 Optional: If the routes that are automatically computed are different from the
planned ones, you must specify route constraints.

 After the trail computation is complete, the server-layer route information of
the to-be-created trail and the port attribute list are displayed at the bottom
of the topology view.

 Click the Server Layer Route Details tab to view the server-layer route
information of the working and protection trails.

 Click the Port Attributes Settings tab to view and modify the port
attributes of the to-be-created trail and its server-layer trails.

 Click Apply to complete the trail settings. The Create Trail dialog box is
displayed, which shows the trail creation progress. Wait until the Operation
Result dialog box displays the message operation succeeded.

 Click Close to close the dialog box.


 If the automatically calculated route is different from the planned route, you need
to set route constraints.

 Click the Explicit Node tab, right-click, and then choose Add NE Constraint or Add
Board/Port Constraint. In the dialog box that is displayed, you can set an NE,
board, or port as the explicit node of the trail to be created.
 After a trail is created, abnormal optical power may cause alarms on the trail.
In this case, commission the optical power.
 Answer:

 No. 1 can support optical cross-connection creation. (In this mode, the
system can work normally without configuring optical cross-connection)

 No. 2 can support optical cross-connection creation too. (In this mode, the
system must be configured with optical cross-connection)
 Intra-board cross-connection: after being processed by the cross-connect unit,
the service signals stay inside the board. As shown in the following figure, the
cross-connect unit connects channel 1 of client-side port 5 (RX3/TX3) to
channel 1 of port 201 (LP1/LP1) on the WDM side of the same board.

 Inter-board cross-connection: After being processed by the cross-connect unit, the service signals are sent to the cross-connect unit of the other board through the backplane, as shown in the following figure.

 When a cross-connect board is configured, cross-connections can be configured between two universal slots, that is, centralized cross-connections.

 When no cross-connect board is configured, cross-connections can be configured between adjacent slots, that is, distributed cross-connections.
 For WDM equipment, the OTU board, tributary board, and line board work
together to complete service grooming. Client services are transmitted from the
client side of the WDM equipment, and then modulated to the WDM system for
transmission after service grooming and convergence. The figure considers the
OTU board with the GE/Any and ODUk cross-connection function as a module to
describe the signal flow of the electrical cross-connections.
 The boards that support the grooming of electrical cross-connections have both
external ports and internal ports. These ports are classified into the following types:
 TX/RX port: client-side optical port of the board that receives and transmits
signals.
 IP port: internal port that corresponds to the RX/TX port. It can be regarded as an RX/TX port.
 AP port: convergence port that represents the internal port of the L2 module.
In this case, the corresponding IP port is an external port.
 LP: logical port that functions as the connection point of cross-connections.
 OP port: internal port that corresponds to the IN/OUT port. It can be
regarded as an IN/OUT port.
 IN/OUT port: line-side optical port of the board that receives and transmits
signals.
 The signals are cross-connected in the following process:
 The optical signals are transmitted to the OTU board through the RX/TX port
and become electrical signals. After the possible L2 processing, the electrical
signals are transmitted to the GE/Any cross-connect module through the AP
port and work with the possible cross-connect signals from the backplane, to
implement the GE/Any cross-connections.
 The electrical signals are transmitted to the ODUk cross-connect module
through the LP port and work with the possible ODUk signals from the
backplane, to implement the ODUk cross-connections. Then, the signals are
transmitted to the optical module through an OP port and added to the
WDM line for transmission.
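As a rough illustration of the signal flow above, the port chain can be modeled as an ordered list. The port names come from the text; the `path` helper itself is a hypothetical sketch, not an NMS API.

```python
# Ports a client signal traverses inside the board, client side to line side
# (names from the text above; this model is illustrative only).
SIGNAL_FLOW = [
    "RX/TX",   # client-side optical port
    "IP",      # internal port corresponding to RX/TX
    "AP",      # convergence port of the L2 module
    "LP",      # logical port: the cross-connection point
    "OP",      # internal port corresponding to IN/OUT
    "IN/OUT",  # WDM line-side optical port
]

def path(src="RX/TX", dst="IN/OUT"):
    """Return the ports crossed between two points of the chain."""
    i, j = SIGNAL_FLOW.index(src), SIGNAL_FLOW.index(dst)
    return SIGNAL_FLOW[i:j + 1]

print(" -> ".join(path()))  # RX/TX -> IP -> AP -> LP -> OP -> IN/OUT
```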
 NE A, NE B, and NE C: OptiX NG WDM Device.

 NE A and NE C use TOA+HUNQ2 to add and drop services.

 Prerequisites:

 Physical fibers and logical fibers have been correctly connected.

 OCh trails have been created.


 The optional actions must be configured in the following scenarios:
 Configure the port type: Port Type must be set to Client Side Color Optical
Port when colored optical signals are received on the client side.
 Configure the timeslot configuration mode: ODU Timeslot Configuration
Mode must be set for the line board that is interconnected with the TOA
board.
 When the port working mode of the TOA board is ODUflex, ODU
Timeslot Configuration Mode must be set to Assign random for the line
board that is interconnected with the TOA board.
 In other port working modes, set ODU Timeslot Configuration Mode for the line board to the same value as that set on the interconnected line board. The recommended value is Assign random.
 Configure the service mode: When Service Type is set to OTU1, Service Mode
must be set to OTN Mode first.
 Configure the ODUflex tolerance(ppm): For the line board which is
interconnected with the TOA board, configure this parameter when the port
of the TOA board works in ODUflex mode. This parameter is reserved and
optional in configuring service types which are currently supported.
 Configure cross-connections from the client side to ClientLP ports:
 Compatible Mode: This action is required only for the ODU0 non-
convergence mode and ODU1 convergence mode.
 Standard Mode: This action is required only for the ODU1 convergence
mode.
 Configuration Procedure:

 Configure the working mode for the TOA board. Set Port Working Mode to
ODU1 convergence mode (n*Any->ODU1).

 Click Apply.

 The port working modes of the TOA board:

 ODU0 non-convergence mode: 8 x (125 Mbit/s~1.25 Gbit/s signal) <-> 8 x ODU0

 ODU1 non-convergence mode: 8 x (1.49 Gbit/s~2.67 Gbit/s signal) <-> 8 x ODU1

 ODU1 convergence mode: 8 x (125 Mbit/s~2.5 Gbit/s signal) <-> (1~8) x ODU1

 ODU1_ODU0 mode: 8 x OTU1 <-> 16 x ODU0

 ODUflex non-convergence mode:

 5 x 3G-SDI <-> 5 x ODUflex

 4 x FC400/4 x FICON 4G <-> 4 x ODUflex

 In this case, ODU1 convergence mode (n*Any->ODU1) is used as an example.


 Configure the service type of client-side ports (NE A and NE C).

 In the NE Explorer of NE A and NE C, select the TOA board and choose Configuration > WDM Interface from the Function Tree.

 On the Basic Attributes tab, set the service type of the client-side port.

 Click Apply.
 The value of Rate can be Standard mode or Speed-up mode. When the WDM-side signal is OTU2e, or when 10GE LAN services are received on the client side and the service mapping path of the client-side port is set to Bit Transparent Mapping (11.1G), this parameter must be set to Speed-up mode. Otherwise, Standard mode is used.
 In the NE Explorer of NE A, select the NE and choose Configuration > WDM
Service Management from the Function Tree.
 Click the WDM Cross-Connection Configuration tab.
 Create intra-board cross-connections.
 Click New. The Create Cross-Connect Service dialog box is displayed.
 Select the corresponding level and service type, and then enter other
parameters of the service.
 Click OK. The Operation Result dialog box is displayed, indicating that the
operation is successful. Click Close to complete the creation.
 Click Query. The created cross-connections are displayed in the WDM Cross-
Connection Configuration window.
 Create inter-board cross-connections.
 Click New. The Create Cross-Connect Service dialog box is displayed.
 Select the corresponding level and service type, and then enter other
parameters of the service.
 Click OK. The Operation Result dialog box is displayed, indicating that the
operation is successful. Click Close to complete the creation.
 Click Query. The created cross-connections are displayed in the WDM Cross-
Connection Configuration window.
 NE C uses the same configuration.
 Choose Service > WDM Trail > WDM Trail Search from the main menu.

 In the Advanced Settings area, set various processing policies.

 In the lower right corner of the window, click Next to start searching for trails. Wait
until the progress bar is updated.

 Click Next to view the searched trails.

 Click Next to view all discrete services on the network.

 After the search is complete, click Finish.

 In the dialog box that is displayed, click OK.

 Follow-up Procedure:

 Choose Service > WDM Trail > WDM Trail Management from the main menu.

 On the Basic Settings tab page, select the desired service level from the
Service Level drop-down list.

 Click Filter All. In the WDM Trail Management window, ensure that the trail of
the subnet to be queried is consistent with the network design.
 The port working modes of the TOA board:

 ODU0 non-convergence mode: 8 x (125 Mbit/s~1.25 Gbit/s signal) <-> 8 x ODU0

 ODU1 non-convergence mode: 8 x (1.49 Gbit/s~2.67 Gbit/s signal) <-> 8 x ODU1

 ODU1 convergence mode: 8 x (125 Mbit/s~2.5 Gbit/s signal) <-> (1~8) x ODU1

 ODU1_ODU0 mode: 8 x OTU1 <-> 16 x ODU0

 ODUflex non-convergence mode:

 5 x 3G-SDI <-> 5 x ODUflex

 4 x FC400/4 x FICON 4G <-> 4 x ODUflex

 In this case, ODU1 convergence mode (n*Any->ODU1) is used as an example.


 Configuration Procedure:

 Configure the working mode for the TOA board. Set Port Working Mode to
ODU1 convergence mode (n*Any->ODU1).

 Click Apply.
 In the NE Explorer, select the desired board and choose Configuration > WDM
Interface from the Function Tree.

 Select By Board/Port (Channel), and then select Channel from the drop-down list.

 On the Basic Attributes tab page, select the optical port for which you want to set
the service type, double-click the Service Type field, and select the required service
type.

 Click Apply.
 Choose Service > WDM Trail > Create WDM Trail from the main menu.

 In the Create WDM Trail window, set the following parameters:

 Level: Client.

 Direction: Bidirectional.

 Rate: GE(GFP-T).

 Set source port.

 In the Physical View, double-click NE A. In the Board Port Selection - Source window, select the TOA board in the corresponding slot as the service access board.

 Select the port and channel for service access, and then click OK.

 Set the sink port in the same way.

 Click Apply. The client trail is created. The Operation Result dialog box is
displayed, indicating that the operation is successful. Click Close.

 To view the created trail, choose Service > WDM Trail > WDM Trail Management
from the main menu. In the WDM Trail Management window, set filter criteria to
view the created trail.
 NE batch configuration: this mode helps you configure service packages for all involved boards in batches on the NMS.

 Choose Configuration > NE Batch Configuration > Service Package Configuration from the main menu.

 In the Service Package Configuration window, click the Board Type drop-
down list to select the required board type.

 In the Service Package window, select the name of the service package that
you want to configure and click Apply To.... In the Select Board dialog box
that is displayed, all subnets containing the selected board type are displayed.
Select the boards where you want to configure the service package and click
OK.

 Click OK in the Confirm dialog box that is displayed asking you "Board
services will be interrupted if you configure a service package. Are you sure
you want to continue?" The Confirm dialog box will be displayed again for
confirmation. Click OK.

 The Configuring Service Package for Boards dialog box is displayed to show
the operation progress. After the operation is completed, the Operation
Result dialog box is displayed.
 Separate board configuration: this mode helps you configure a service package for a separate board.

 In the NE Explorer, choose Configuration > Service Package from the Function Tree.

 Select the service package and click Apply.

 Click OK in the Confirm dialog box that is displayed asking you "Board
services will be interrupted if you configure a service package. Are you sure
you want to continue?" The Confirm dialog box will be displayed again for
confirmation. Click OK.

 The Configuring Service Package for Boards dialog box is displayed to show
the operation progress. After the operation is completed, the Operation
Result dialog box is displayed.
 In the NE Explorer, select the board where the service package is configured and
choose Configuration > WDM Interface from the Function Tree. Check whether the
service type of the port corresponding to the board is configured according to the
specifications of the selected service package.
 In the NE Explorer, select the board where the service package is configured and
choose Configuration > Working Mode from the Function Tree. Check whether the
board working mode and port working mode are configured according to the
specifications of the selected service package.
 Configure the ODU1 service cross-connection between the TOA board 1 (Rx1/Tx1)
and the HUNQ2 board 1 (IN1/OUT1).

 In the NE Explorer, select an NE and choose Configuration > WDM Service Management from the Function Tree.

 Click the WDM Cross-Connection Configuration tab, and then click Create.
The Create Cross-Connection Service dialog box is displayed.

 Set Level to ODU1 and set other parameters for the service.

 Click OK. The Operation Result dialog box is displayed, indicating that the
operation is successful. Click Close to complete the creation.

 Click Query. The created cross-connections are displayed in the WDM Cross-
Connection Configuration window.
 After searching the WDM Trails, verify that the WDM services are successfully
configured.

 In the Main Topology view, choose Service > WDM Trail > Manage WDM
Trail from the Main Menu.

 On the Basic Settings tab, select the level of the service being queried for
Level.

 Click Filter All. In Manage WDM Trail, check whether the trails on the subnet being queried are consistent with the network design.
 Only the TOM, THA/TOA, and LOA boards support service packages.
 Reference answer:

 E2E service configuration supports cross-layer service trail creation. After a client service trail is created, an electrical-layer server trail is automatically generated. This reduces the number of trail creation operations.
 Other related test instruments:

 GE/10GE/100GE tester: used to test GE/10GE/100GE service indicators.

 OTN tester: used to test OTN service indicators.

 SmartBits: used to test Ethernet service indicators.

 ESCON tester: used to test ESCON service indicators.

 FICON/FC tester: used to test FICON/FC service indicators.

 Phillips screwdriver: used to remove screws from boards.

 Compressed gas dedusting agent: used to clean the optical interfaces of boards.

 The optical power of a single wavelength in multiplexed signals needs to be tested with an optical spectrum analyzer. This method is more accurate because it excludes the influence of noise.

 Align the optical spectrum analyzer before using it to test the optical power. The method to verify the alignment is as follows:

 Test the optical power of the OUT optical interface on the OTU with an optical spectrum analyzer and compare the value with that measured by an optical power meter. If the difference is less than 0.5 dB, the alignment is acceptable. If not, align the optical spectrum analyzer again.
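On a multiplexed port, a similar comparison can be sketched by summing the per-channel OSA readings (converted to mW) and checking the total against the power meter. The readings below are hypothetical examples, not values from the text.

```python
import math

def total_dbm(channel_dbm):
    """Sum per-channel powers (dBm) into a total power in dBm."""
    total_mw = sum(10 ** (p / 10) for p in channel_dbm)
    return 10 * math.log10(total_mw)

# Hypothetical per-channel OSA readings and a power-meter total reading
osa_channels = [-16.1, -15.9, -16.0, -16.2]  # dBm
power_meter_total = -10.0                    # dBm

diff = abs(total_dbm(osa_channels) - power_meter_total)
print(diff < 0.5)  # alignment acceptable if the difference is under 0.5 dB
```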
 You can use the iManager U2000 or OptiX iManager U2000 Web LCT for deployment commissioning and configuration. All operations supported by the Web LCT can also be performed on the U2000; however, the Web LCT has relatively low computer hardware requirements and starts quickly.
 Checking the Cabinet Reinforcement

 Cabinet fixing components are correctly installed with required bolts/screws.

 All associated bolts are safely tightened

 Checking the Board Installation

 Board connectors have no scratches, holes, or damage.

 All boards are fully seated in the appropriate slots and locked in the slots
with the ejector levers on their front panels.

 Checking the Cable Routing

 The bending radius of cables is larger than or equal to 60 mm, and they
should be bound at the bend.

 The power cables, PGND cables, and signal cables for a cabinet are routed in
different bundles, with an interval of more than 30 mm between each bundle.

 Checking the Cabinet Doors and Side Panels

 The front and rear doors and the side panels have been installed correctly.

 The front and rear doors are easy to open and close.

 Minimum bending radius for G.652 fiber is 30mm. Minimum bending radius for
G.657A2 fiber is 10mm. When fibers are connected to CRPC boards, the bending
radius of the fibers must be greater than or equal to 50 mm.
 Note: The indicators on the top of the NG WDM equipment cabinet are driven by
the LAMP port in the subrack. Therefore, only after the subrack is powered on are
the indicators on.

 Checking the subrack power on

 Powering on the subrack (The green indicator stays on.)

 Checking the fan (FAN indicator is always green.)

 Checking fiber attenuation

 Fiber connection between OTU (client-side) and the ODF

 Between FIU (line-side) and the ODF

 Between two subracks

 Fan check procedure:

 When the subrack is powered on, the fans start to run. Check the air ventilation at the top and the bottom of the subrack.

 Observe the FAN indicator on the front panel of the fan. Normally, it is steady green. If the FAN indicator is steady red, two or more fans are faulty. If it is steady yellow, one fan is faulty. Clear the fault before continuing with the commissioning.
 The principles for configuring the FOA for the OSC board are as follows:

 Configure the FOA at the TM1/TM2 port on the ST2/AST2 board or the TX
port on the DAS1 board.

 Configure the FOA at the TM port on the SC1 board or the TM1/TM2 port on
the SC2 board.

 Configure the 10 dB FOA at the TM1/TM2 port on the ST2/AST2 board or the
TX port on the DAS1 board. Configure the 10 dB FOA at the RM port on the
FIU board

 After the optical supervisory channel (OSC) commissioning is complete, the communication between NEs is normal. You can use the U2000 to complete basic NE and network configurations and prepare for optical power commissioning.
 Connecting to the NMS computer

 Check the network cable. One end is connected to the network port of the
NMS computer, and the other end is connected to the NM_ETH1/NM_ETH2
port (OSN 8800/universal platform subrack) of the equipment. If the OSN
1800/9800 is used, connect the other end to the NM port on the device.

 Changing the NE ID and IP Address

 The ECC protocol uses the NE ID as the unique identifier of an NE. The U2000
uses the NE ID as the search keyword when identifying different NEs in the
GUI and database. During network planning, a unique NE ID must be
assigned to each NE. If NE IDs conflict, ECC routes conflict and some NEs
cannot be managed. During commissioning or capacity expansion, you need
to modify the original planning and change the NE ID on the U2000.

 Setting the manual extended ECC communication

 When no optical path is available between two or more NEs, you can use the
Ethernet port of the NE to implement extended ECC communication.
 Basic requirements of optical power commissioning are as follows:

 The optical power under commissioning should be between the allowed maximum and minimum values.

 A margin is required to ensure that power fluctuation within a range has no impact on services.

 Optical power commissioning should meet the customer's requirements for system expansion.

 Consider the power compensation value (Offset).

 Requirements of CWDM Commissioning:

 The CWDM network does not support the OA. Only the optical power
commissioning is needed in CWDM commissioning. The OSNR and flatness
commissioning are not needed.

 Only the received optical power of the OTU needs to be commissioned. The specific commissioning requirements and procedures are similar to those of the DWDM system.
 General process of optical power commissioning

 Optical power commissioning adjusts the optical power values of NEs and boards one by one along the optical signal flow, and removes abnormal attenuation of lines or boards according to the requirements on board optical power, gain, and insertion loss. The commissioning is performed according to the optical power commissioning requirements for the optical amplifier unit, OTU, and OSC boards.

 Optical power commissioning procedure

 Generally, the sites between two OTMs in the NG WDM system form one network segment. Each segment has two signal flow directions: transmit and receive.

 The NG WDM system commissions the optical power of each NE one by one along the signal flow in each network segment.

 First, complete the optical power commissioning of one OTM in the transmit direction. Then commission the optical power of each downstream NE one by one, and complete the optical power commissioning of the destination OTM in the receive direction. Finally, commission the other signal flow in the reverse direction.
 Note: When connecting the optical fiber to the input optical interface on the WDM side of the line board, keep the fiber connector loosely inserted (not fully seated) at first. Otherwise, the input optical power may exceed the overload point and burn the receive optical module. The overload optical power of an APD receiver laser is only -9 dBm, so exercise caution when performing this operation to prevent the optical module from being burnt by high power.

 Commissioning Requirements

 Before the signal enters the line board, adjust the input optical power of the WDM-side optical interface (IN) of the line board to the optimal receive range: (sensitivity + 3) dBm to (overload point - 5) dBm.
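A quick way to check the IN-port power against that range and size a fixed attenuator. The sensitivity and measured values below are hypothetical; the -9 dBm overload point is taken from the note above.

```python
# Optimal receive range: (sensitivity + 3) dBm to (overload point - 5) dBm.
sensitivity = -18.0   # dBm, hypothetical line-board sensitivity
overload = -9.0       # dBm, APD receiver overload point (from the text)

low, high = sensitivity + 3, overload - 5   # -15.0 .. -14.0 dBm
measured_in = -10.0                         # dBm, hypothetical measured power

if measured_in > high:
    print(f"add at least {measured_in - high:.0f} dB of attenuation")
elif measured_in < low:
    print("input power too low: check the fiber and upstream OA output")
else:
    print("within the optimal receive range")
```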

 Generally, the output optical power on the WDM side of the line board does
not need to be commissioned. In the case of the OADM site or wavelength
protection, you need to adjust the VOA at the output optical interface on the
WDM side of the line board so that the gain flatness of each wavelength that
passes through the optical amplifier board is ≤2dB.

 The optical amplifier board at the front end of the line board at the receive
end has output the standard per-channel optical power. Therefore, you can
determine whether to add, change, or remove the fixed optical attenuator at
the input end of the line board according to the actual input optical power.
 Some boards of the NG WDM equipment provide a MON port. A small portion of the main signal is split to the MON port, which is used to monitor the performance of the optical signals online.

 The MON port of the M40V/D40 and ITL boards taps 10/90 of the OUT port power. That is, the MON port power is 10 dB lower than the OUT port power.

 The MON port of the FIU board taps 1/99 of the OUT port power. That is, the MON port power is about 20 dB lower than the OUT port power.
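The relationship between tap ratio and dB offset can be verified with a one-line calculation; this sketch treats the 1/99 split as approximately 1%.

```python
import math

def tap_offset_db(tap_fraction):
    """dB difference between the MON port and the OUT port for a given
    fraction of the power split to the MON port."""
    return -10 * math.log10(tap_fraction)

print(tap_offset_db(0.10))            # M40V/D40/ITL: MON is 10 dB below OUT
print(round(tap_offset_db(0.01), 1))  # FIU: MON is ~20 dB below OUT
```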
 The preset value is calculated based on the nominal single wavelength output
optical power of the receive end amplifier, nominal single wavelength input power
of the transmit end amplifier, and internal insertion loss of the pass-through
ROADM board.

 When creating a single-station optical cross-connection, you are advised to select the automatic optical power adjustment mode (OPA mode). For application scenarios that do not support automatic adjustment, select the manual adjustment mode.

 When the automatic optical power adjustment mode is selected, the optical power
at the OUT port on the optical amplifier board at the receive end and the rated
optical power at the IN port on the transmit optical amplifier board have default
values.
 Adjusting the input optical power of OA board

 Adjust the average single λ input power of the IN interface of the OA and
make it close to the typical input power of single λ. Ensure that the number
of λs whose power is larger than the typical value is extremely close to the
number of λs whose power is smaller than the typical value.

 If the average single-λ input power at the OA input end is higher than the typical single-λ input power, add or adjust a VOA before the OA to bring the average single-λ input power down to the typical value. Otherwise, no VOA is needed.

 Adjusting the gains of OA

 For the OAU1, set the gain to ensure that the mean output power equals the
maximum output power of single λ, which is 4 dBm. Gain = Maximum output
power of single λ – Mean input power of single λ.

 After setting the gain, use the OSA to check whether the mean output power
of single λ is within the range from 3.5 dBm to 4.5 dBm. If it exceeds this
range, finely tune the gain value.

 If the mean output power of single λ is more than 4.5 dBm, decrease the gain
value to adjust the mean output power of single λ to 4 dBm. If it is less than
3.5 dBm, increase the gain value to adjust the mean output power of single
wavelength to 4 dBm. The allowable deviation is within ±0.5 dBm.
 Typical input power of single wavelength of the OBU101 is –23 dBm (80λ system).

 Typical input power of single wavelength of the OBU103 is –22 dBm (80λ system).

 Typical input power of single wavelength of the OBU104 is –20 dBm (80λ system).

 Typical input power of single wavelength of the OBU205 is –19 dBm (80λ system).

 Typical input power of single wavelength of the OAU101 is –19 dBm (80λ system).

 Typical input power of single wavelength of the OAU103 is –23 dBm (80λ system).

 Typical input power of single wavelength of the OAU105 is –19 dBm (80λ system).

 Typical input power of single wavelength of the OAU105 is –15 dBm (80λ system).
 Note: In this case, the maximum signal gain that can be set on the OAU1 is: 31 dB - 5 dB = 26 dB.

 Gain = 4 dBm - (-20 dBm) = 24 dB < 26 dB, which meets the requirement.
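The same check in code, with the figures taken from the worked example above:

```python
# Maximum settable signal gain in this case (from the note above)
max_gain_db = 31 - 5            # 26 dB

# Required gain = target single-wavelength output power - mean input power
target_out_dbm = 4.0            # dBm
mean_in_dbm = -20.0             # dBm
required_gain_db = target_out_dbm - mean_in_dbm   # 24 dB

assert required_gain_db <= max_gain_db  # 24 dB < 26 dB: requirement met
print(required_gain_db)  # 24.0
```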


 The pump output optical power of the Raman amplifier is high. The higher the optical power, the higher the requirement on the tail fiber, and excessive power may damage the equipment or cause personal injury. Therefore, keep the Raman pump light as low as possible while ensuring a switch gain ≥ 10 dB; the maximum optical power should be ≤ 29 dBm.

 The reverse or forward output optical power of the Raman amplifier reaches
27dBm. Before using the Raman board, shut down the laser of the Raman board.

 The connector of the optical fiber connector must use a dedicated APC fiber
connector. If the PC fiber connector is used, a large reflection will be generated
and the fiber connector will be burnt.

 For a backward-pump Raman optical amplifier board, the strong pump light enters the line fiber through the input end (LINE) rather than the output end (SYS). Do not add non-fiber devices such as attenuators and jumpers before the input end.

 The bending radius of the fiber jumper must meet the requirements. Otherwise,
the tail fiber may be burnt.

 Before turning on the laser of the Raman board, you must connect the jumper of
the input end and the jumper of the corresponding customer ODF cabinet. When
removing and inserting the optical fiber, ensure that the fiber connector is clean. If
the connector is dirty, the optical connector may be damaged.
 System commissioning is based on OCh. Therefore, you need to create an end-to-
end OCh trail first. When creating an OCh trail, set the OPA mode to manual.

 The signal flow varies according to the type of sites. The above figure takes the
add signal flow at the OTM site as an example.
 Wavelength expansion will reduce the power of adjacent wavelengths. Therefore,
you need to fine-tune the existing wavelengths to ensure flatness. If the
performance of adjacent channels deteriorates to the threshold, the expansion
may interrupt services. Therefore, you need to check the performance of all
existing wavelengths before expansion and perform the operations in the period
that has the minimum impact on services.
 If there is no pass-through wavelength, you do not need to adjust the flatness in
the wavelength dropping direction. This ensures that the gain of the optical
amplifier at the receive end of the local station can compensate for the
attenuation of the fiber between the upstream stations.

 For the 80λ system, the ITL board is configured before the demultiplexer. If the
input optical power is low after the wavelength passes the ITL and demultiplexer,
adjust the output optical power of the OA board at the receive end. If the input
optical power of the optical amplifier at the receive end is too low, check the
optical fiber onsite.

 After the wavelength dropping commissioning is complete, check whether the input optical power of the OTU board is within the required range and whether the FEC BER of the OTU board is within the required range.

 After the commissioning is complete, use the same method to adjust the reverse
direction to ensure that the bidirectional performance meets the requirements.
 mW and dBm are absolute units of optical power.

 dB is a relative unit of optical power. It is used when describing gain and attenuation values.
 Note: The formula assumes that every wavelength carries the same optical power, that is, the spectrum is flat.

 Ptotal is the total multiplexed power. P1/P2 are single-wavelength powers.
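These relationships can be written out directly as a small sketch; `lg` in the slides is log base 10.

```python
import math

def mw_to_dbm(p_mw):
    """Convert absolute power in mW to dBm."""
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm):
    """Convert absolute power in dBm to mW."""
    return 10 ** (p_dbm / 10)

def total_power_dbm(p_single_dbm, n):
    """Ptotal = Psingle + 10*lg(N) for N wavelengths of equal (flat) power."""
    return p_single_dbm + 10 * math.log10(n)

print(mw_to_dbm(1))                        # 0.0: 1 mW is 0 dBm
print(round(total_power_dbm(-16, 40), 2))  # 40 wavelengths at -16 dBm each
```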
 The typical input/output power of the OBU103 for a single channel is -19/+4 dBm; the gain is 23 dB.

 Commissioning steps:

 1. Adjust VOA① to make sure the optical power of each channel at the OBU103 is -19 dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of the OBU103 or M40 to check the flatness of the spectrum.

 3. The gain flatness across all wavelengths should be < 6.0 dB.
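The flatness check amounts to a max-minus-min calculation over the per-channel OSA readings; the sample powers below are hypothetical.

```python
# Hypothetical per-channel powers read from the OSA, in dBm
channel_powers_dbm = [-18.6, -19.3, -18.9, -19.8, -19.1]

# Flatness: difference between the strongest and weakest channels
flatness_db = max(channel_powers_dbm) - min(channel_powers_dbm)

print(round(flatness_db, 1))
assert flatness_db < 6.0   # requirement from the text: flatness < 6.0 dB
```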


 The typical input/output power of the OAU101 for a single channel is -16/+4 dBm.

 When the input power is higher than the typical total power (-16 + 10lgN) dBm, adjust the VOA.

 When the input power is lower than the typical total power (-16 + 10lgN) dBm, remove the VOA and change the gain (20~31 dB) so that the output power reaches the typical value (+4 + 10lgN) dBm.

 Commissioning steps:

 1. Adjust VOA① to make sure the total power at the OAU1 is close to (-16 + 10lgN) dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of the OAU1 to check the flatness of the spectrum.

 3. Change the gain of the OAU1 to make sure the output power is close to (+4 + 10lgN) dBm.
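A worked example of these targets for, say, N = 40 wavelengths (N is an assumed value; the -16/+4 dBm typical powers are from the text):

```python
import math

n = 40  # assumed number of wavelengths in service

# OAU1 typical single-wavelength input/output: -16 / +4 dBm
target_total_in = -16 + 10 * math.log10(n)   # total power target at IN
target_total_out = 4 + 10 * math.log10(n)    # total power target at OUT
gain = target_total_out - target_total_in    # 20 dB, inside the 20~31 dB range

print(round(target_total_in, 2), round(target_total_out, 2), round(gain, 1))
```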
 The typical input/output power of the OBU103 for a single channel is -19/+4 dBm; the gain is 23 dB.

 Commissioning steps:

 1. Adjust VOA① to make sure the total power at the OAU101 is as close as possible to (-16 + 10lgN) dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of the OAU101 to check the flatness of the spectrum.

 3. Keep the maximum optical power difference among all channels as small as possible.

 4. Change the gain of the OAU101 to make sure the output power is close to (+4 + 10lgN) dBm.

 5. Add a fixed attenuator② to the fiber that will be inserted into the IN port of the OTU.

 6. Adjust VOA③ to make sure the power of the pass-through channels at the OBU103 is the typical value (-19 + 10lgN) dBm.

 7. Adjust the VOA on the MR8V to make sure the power of the added channels at the OBU103 is the typical value (-19 + 10lgN) dBm.

 8. Use an OSA connected to the MON port of the OBU103 to check the flatness of the spectrum.
 Commissioning steps:

 1. Adjust VOA① to make sure the total power at the OAU101 is as close as possible to (-16 + 10lgN) dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of the OAU101 to check the flatness of the spectrum.

 3. Keep the maximum optical power difference among all channels as small as possible.

 4. Change the gain of the OAU101 to make sure the output power is close to (+4 + 10lgN) dBm.

 5. Add a fixed attenuator② to the fiber that will be inserted into the IN port of the OTU.

 6. Adjust VOA③ to make sure the power of the pass-through channels at the OBU103 is the typical value (-19 + 10lgN) dBm.

 7. Adjust the VOA on the M40V to make sure the power of the added channels at the OBU103 is the typical value (-19 + 10lgN) dBm.

 8. Use an OSA connected to the MON port of the OBU103 to check the flatness of the spectrum.
 Commissioning steps:

 1. Adjust VOA① to make sure the total power at the OAU101 is as close as possible to (-16 + 10lgN) dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of the OAU101 to check the flatness of the spectrum.

 3. Keep the maximum optical power difference among all channels as small as possible.

 4. Change the gain of the OAU101 to make sure the output power is close to (+4 + 10lgN) dBm.

 5. Add a fixed attenuator② to the fiber that will be inserted into the IN port of the OTU.

 6. Adjust the VOAs in the WSMD4③ to make sure the power of the added and pass-through channels at the OBU103 is the typical value (-19 + 10lgN) dBm.

 7. Use an OSA connected to the MON port of the OBU103 to check the flatness of the spectrum.
 The typical input/output power of the OAU1 for a single channel is -16/+4 dBm.

 The insertion loss of the D40 is about 6.5 dB.

 When the input power is higher than the typical value (-16+10lgN) dBm, adjust the VOA.

 When the input power is lower than the typical value (-16+10lgN) dBm, remove the VOA and change the gain (20~31 dB) so that the output power reaches the typical value (+4+10lgN) dBm.
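The 10lgN terms above are just 10·log10(N) added to the single-channel typical power. A minimal sketch of the arithmetic (the function name is illustrative):

```python
import math

def total_power_dbm(per_channel_dbm: float, n_channels: int) -> float:
    """Total power of N equal-power channels: per-channel power + 10*lg(N)."""
    return per_channel_dbm + 10 * math.log10(n_channels)

# Typical OAU1 single-channel input/output is -16/+4 dBm, so for a
# 40-channel system the targets become roughly 0 dBm in and 20 dBm out.
target_in = total_power_dbm(-16, 40)   # about 0.02 dBm
target_out = total_power_dbm(4, 40)    # about 20.02 dBm
```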

 Commissioning steps:

 1. Adjust VOA① to make sure the total power at OAU1 is close to (-16+10lgN) dBm.

 2. Use an OSA (optical spectrum analyzer) connected to the MON port of OAU1 to check the flatness of the spectrum.

 3. Keep the maximum optical power difference among all the channels as small as possible.

 4. Change the gain of OAU1 to make sure the output power is as close as possible to (+4+10lgN) dBm.

 5. Add a fixed attenuator to the fiber that will be inserted into the IN port of the OTU.
 The commissioning of the 40G/100G coherent system is similar to that of the
10G/40G non-coherent system. However, some specific boards are added to the
coherent system, and the requirements for the incident optical power are different.
 In a coherent and non-coherent hybrid system, commission the non-coherent signals first and comply with the corresponding rules.

 In an optical transmission system, after the optical power of the transmit-end OA


board is adjusted to the nominal output optical power, the incident optical power
needs to be determined and commissioned. (Fiber access scenarios include the
mainstream fiber access scenario and special fiber access scenario).
 The objective of OSNR commissioning is to ensure that the OSNR of every wavelength is higher than the designed OSNR tolerance. OSNR tolerance refers to the threshold below which the boards at the receive end cannot recover error-free carrier signals. In certain special situations, this objective can be adjusted appropriately, but a certain OSNR margin must be preserved. Adjusting the OSNR improves the lowest OSNR among the wavelengths that have the same source and sink; note that wavelengths with different sources or sinks have different OSNRs. The detected OSNR value may be incorrect if there is a parallel OADM station using M40/D40, WSMD4, or WSM9+WSD9 boards on the link. Therefore, OSNR commissioning should be performed only when there is no parallel OADM station on the link.
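The OSNR check above can be sketched as follows. The function names and the 3 dB default margin are illustrative assumptions, not values from the product documentation:

```python
def osnr_ok(measured_db: float, tolerance_db: float, margin_db: float = 3.0) -> bool:
    """True if a wavelength's measured OSNR clears the design tolerance
    plus an engineering margin (the margin value is an assumed example)."""
    return measured_db >= tolerance_db + margin_db

def worst_osnr(per_channel_osnr_db: list) -> float:
    """Commissioning aims to raise the lowest OSNR among wavelengths
    that share the same source and sink."""
    return min(per_channel_osnr_db)
```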
 Commissioning of incident optical power must comply with the following rules:

 Incident optical power counters are adjusted based on the network design.

 Incident optical power counters refer to the mainstream incident optical power of each fiber by default.

 High and low special incident optical power values are used in special network scenarios and must be adjusted based on the network design.

 When adjusting incident optical power counters and the optical power of the downstream OA board, ensure that the incident optical power counters meet requirements.

 If the output optical power of the downstream OA board meets the requirement at the minimum gain, the upstream incident optical power can be less than the incident optical power counters.

 If the output optical power of the downstream OA board does not meet the requirement at the minimum gain, preferentially reduce the upstream EVOA attenuation to a value within the range required by the incident optical power counters.

 If the output optical power of the downstream OA board still does not meet the requirement, increase the gain of the downstream OA board.
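The three rules above form a simple decision order, sketched below (a hypothetical helper, not NMS logic):

```python
def next_adjustment(output_meets_requirement: bool,
                    evoa_can_be_reduced: bool) -> str:
    """Decide the next commissioning action for incident optical power.

    The order follows the rules above: accept low incident power if the
    downstream OA output is already fine at minimum gain; otherwise
    reduce upstream EVOA attenuation first, and only then raise gain.
    """
    if output_meets_requirement:
        return "no action"
    if evoa_can_be_reduced:
        return "reduce upstream EVOA attenuation"
    return "increase downstream OA gain"
```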
 EDFA specifications:

 RAU106: The gain range of the integrated EDFA is 14–23 dB, different from that of the OAU106 board.

 RAU201: The integrated EDFA is the OAU101 board, so the EDFA specifications are the same as those of the OAU101.

 Raman specifications of RAU106/RAU201:

Item                                          Unit   Raman Specification of RAU106/RAU201
Operating wavelength range                    nm     1528.5–1561.5
Effective gain range (G.652)                  dB     5–10; Raman gain mode can be set to the maximum gain mode
Effective gain range (LEAF)                   dB     5–12; Raman gain mode can be set to the maximum gain mode
Effective gain range (G.653)                  dB     5–12; Raman gain mode can be set to the maximum gain mode
Input optical power range                     dBm    ≤1
Maximum total output optical power in gain
lock mode                                     dBm    6
Single-wavelength input optical power range   dBm    -40 to 1
Total input optical power range when a system
is fully configured with wavelengths          dBm    -24 to 1
 Check whether line quality is good, whether the end faces of connectors are clean, and whether ports are connected properly.

 Select an appropriate Raman gain mode based on the application scenario and optical fiber type.

 Gain lock mode: applies to multi-span systems and scenarios where the types of supported optical fibers are defined.

 Maximum gain mode: applies to single-span systems and scenarios where the types of supported optical fibers are defined.

 Power lock mode: applies to scenarios where the types of supported optical fibers are not defined or where CRPC is replaced with Raman.

 Set the Raman gain based on the optical fiber type and gain mode.

 RAU106: Adjust the Raman gain (within the permitted range) based on the input optical power reported by the integrated OA to control the input optical power of the downstream integrated OA, which is similar to adjusting the input optical power of OAs using EVOAs.

 RAU201: The EVOA on the RAU201 board can control the input optical power of the integrated OA. Therefore, you can set the Raman gain to the maximum gain of the supported optical fibers.
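The mode-selection guidance above reduces to a small decision table; a sketch (names are illustrative):

```python
def raman_gain_mode(span_count: int, fiber_type_defined: bool,
                    replaces_crpc: bool = False) -> str:
    """Pick a Raman gain mode per the scenarios described above."""
    if not fiber_type_defined or replaces_crpc:
        return "power lock"       # fiber type unknown, or Raman replaces CRPC
    if span_count == 1:
        return "maximum gain"     # single-span system, known fiber type
    return "gain lock"            # multi-span system, known fiber type
```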
 The Raman amplifier optical module supports the gain lock, maximum gain, and power lock modes.

 The gain lock mode applies to multi-span or single-short-span systems that use G.652/G.653/LEAF/TWRS/TW-C/TWPLUS/SMFLS/G.656/G.654A/TERA_LIGHT/G.654B fibers. In this mode, the gain of the Raman unit is tunable and users can query the actual gain of the Raman unit.

 The maximum gain mode applies to ultra-long single-span systems that use G.652/G.653/LEAF/TWRS/TW-C/TWPLUS/SMFLS/G.656/G.654A/TERA_LIGHT/G.654B fibers. In this mode, the Raman amplifier module automatically adjusts its pump power so that its gain reaches the maximum value. Users can query the actual gain of the Raman amplifier module.

 The pump power (power lock) mode applies to systems that use fibers other than the G.652/G.653/LEAF/TWRS/TW-C/TWPLUS/SMFLS/G.656/G.654A/TERA_LIGHT/G.654B fibers, or to situations in which the pump power of the Raman amplifier module must be adjusted manually.
 The following table lists prohibited configurations and connections of RAU boards.

Prohibited Configuration          Reason
DCM/DCU is configured between     This configuration affects the system OSNR and causes
Raman and EDFA.                   inappropriate Raman gain control.
OLP 1+1 protection                When a fiber that is far from Raman or an upstream fiber is
                                  broken, the noise power is amplified by Raman. As a result,
                                  the optical power difference detected by the OLP stays within
                                  the threshold range and no protection switching is performed.
Single-site RAU cascading         RAU cascading does not meet the requirements of Raman
                                  amplification. The second Raman has no gain medium, so it
                                  cannot amplify the optical power.
 Specifications of the OAU106:

Item                                    Unit   Specification
Operating wavelength range              nm     1528.5–1561.5
Nominal gain                            dB     16           19           23
Total input optical power range         dBm    -24 to 4     -24 to 1     -24 to -3
Per-channel input optical power
range (40 channels)                     dBm    -24 to -12   -24 to -15   -24 to -19
Per-channel input optical power
range (80 channels)                     dBm    -24 to -15   -24 to -18   -24 to -22
Nominal single-wavelength input
optical power (40 channels)             dBm    -12          -15          -19
Nominal single-wavelength input
optical power (80 channels)             dBm    -15          -18          -22
Noise figure (NF)                       dB     ≤7           ≤6           ≤5
Channel gain                            dB     16–23
Gain flatness                           dB     ≤2.0
Fixed insertion loss                    dB     ≤1.5
VI-VO dynamic attenuation range         dB     20
Split ratio of the MON port             dB     20±1.5
 Precautions for fiber connections:

 TN12TD201 boards apply to coherent 40-channel/80-channel systems only. DMx ports can be connected to coherent OTU boards only.

 A TN12TD201 board can receive only 20 wavelengths. The IN port on the TN12TD201 board cannot be connected to the DMx port on an RDU9/WSMD4/WSMD9 board, to prevent excessively high optical power of multiplexed wavelengths dropped at the DMx ports of the TN12TD201.
 Before starting deployment commissioning, check the design documents to ensure that the designs, such as the dispersion configuration and compensation method, OSNR, ITL configuration, and channel allocation for mixed transmission of 10G, 40G, and 100G signals, meet the requirements for setting up the coherent transmission system.

 To ensure the security of the NE data, back up the commissioning result after optimizing the performance of each wavelength.
 IPA is an automatic power management technique. To prevent laser light from causing bodily injury, the product provides the IPA function, which shuts down the laser on the affected OA as early as possible when a fiber breaks.

 When the RAU1 or RAU2 board is configured in the system to implement the IPA function, two configuration methods are available:

 Set the RAU1 or RAU2 board as the Raman amplifier, set the TN14FIU and OSC boards as the Auxiliary Detection Board, and leave Detection Board blank (recommended).

 Set the RAU1 or RAU2 board as the Raman amplifier, set the OA, RAU1, or RAU2 boards as the Detection Board, and set the regeneration boards or OSC boards as the Auxiliary Detection Board.

 The APE function ensures optical power flatness at the receive end, which ensures the signal-to-noise ratio. The APE test is performed to determine whether the APE function has started. When the flatness of the per-channel optical power at the receive end differs significantly from that configured during deployment commissioning, the APE function can automatically adjust the optical power of each channel at the transmit end. This brings the flatness of the optical power at the receive end closer to that configured during deployment commissioning. The reconfigurable optical add/drop multiplexer boards and optical multiplexer boards supporting the APE function are the M40V, WSM9, WSMD2, WSMD4, WSMD9, ROAM, and RMU9 boards.
 Operation procedure:

 Step 1: Choose Administration > NE Software Management > NE Data Backup/Restoration from the main menu.

 Step 2: Right-click in the NE View list. A shortcut menu is displayed.

 Step 3: Click Backup. The Backup dialog box is displayed.

 Step 4: Select the NM Server or NM Client option button to back up the selected device information.

 Step 5: (Optional) If you select the NM Client option button, select the path for backing up the device data.

 Step 6: Click Start. The NE View tab page displays the backup progress.

 Step 7: After the backup succeeds, the U2000 creates the following directories and files in the specified path: NEName/yyyymmddhhmmss/dbf.pkg, where NEName indicates the NE name, yyyymmdd indicates the year, month, and day, and hhmmss indicates the backup time.
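The backup directory layout in Step 7 can be reproduced as follows (a sketch of the naming rule only; the function name is illustrative):

```python
from datetime import datetime

def backup_path(ne_name: str, when: datetime) -> str:
    """Path the U2000 creates for an NE data backup:
    NEName/yyyymmddhhmmss/dbf.pkg."""
    return f"{ne_name}/{when.strftime('%Y%m%d%H%M%S')}/dbf.pkg"
```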
 Reference answer:

1. The optimal receive range of the input optical power is (sensitivity + 3) dBm to (overload – 5) dBm.
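As a quick check of the formula above (the sensitivity and overload values in the test are arbitrary examples):

```python
def optimal_receive_range(sensitivity_dbm: float, overload_dbm: float):
    """Optimal input power window: (sensitivity + 3) to (overload - 5) dBm."""
    return sensitivity_dbm + 3, overload_dbm - 5

def in_optimal_range(power_dbm: float, sensitivity_dbm: float,
                     overload_dbm: float) -> bool:
    low, high = optimal_receive_range(sensitivity_dbm, overload_dbm)
    return low <= power_dbm <= high
```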
 The NG WDM equipment supports the following equipment-level protection schemes: power backup, fan redundancy, cross-connect board backup, system control and communication board backup, and clock board backup.

 The OptiX OSN 1800 subrack, OptiX OSN 8800 T16 subrack, 8800 platform subrack, universal platform subrack, OptiX OSN 9800 P18, and OSN 9800 universal platform subrack use two PIU boards to supply power to the entire system in hot backup mode. When one PIU board is faulty, the system can still work properly. The OptiX OSN 1800 subrack, OptiX OSN 8800 universal platform subrack, and OptiX OSN 9800 universal platform subrack support AC power supply. When AC power is used, the corresponding power board is the APIU.
 The OptiX OSN 8800 T32 and T64 and OptiX OSN 9800 U16, U32, and U64 subracks use a partitioned power supply mode. Each subrack is configured with PIU boards that back up each other.

 The OptiX OSN 9800 U32 subrack uses -48V/-60V DC power, and the current can be flexibly configured. In this example, the areas with the same background color belong to the same partition. A1 and B1, A2 and B2, A3 and B3, A4 and B4, and A5 and B5 back up each other to supply power to the subrack. If any external -48V/-60V power input is faulty, the normal operation of the equipment is not affected.
 The OSN 1800V system control, cross-connect, and timing board supports 1+1
active/standby protection. The active and standby cross-connect boards back up
each other. The active cross-connect board and the standby cross-connect board
are connected to the service cross-connect slots through the backplane bus to
protect the services.

 The SCC and cross-connect boards of the OptiX OSN 8800 support 1+1
active/standby protection.
 The OSN 9800 U16, U32, and U64 control boards support 1+1 active/standby
protection, and the cross-connect board uses M:N backup.

 The system control and cross-connect board area of the OSN 9800 U64 consists of
14 cross-connect boards (XCS) and two SCC boards (CTU). The cross-connect
board adopts the 2:12 backup mode, which provides cross-connections for the
service boards on the front and rear sides.

 The system control and cross-connect board area of the OSN 9800 U32 and U16 contains seven cross-connect boards (XCS) and two SCC boards (CTU). The cross-connect boards adopt the 2:5 backup mode to provide cross-connections for the service boards.

 Fan redundancy: When any fan in the fan tray is faulty, the system can run for 96
hours at the ambient temperature of 0°C~45°C.
 The WDM equipment provides two types of optical line protection: 1+1 and 1:1.
The 1+1 optical line protection is implemented by the dual fed and selective
receiving function of the OLP board. According to the position of the OLP board
on the network, the protection segments are different, including 1+1 OMS trail
protection and 1+1 OTS trail protection.

 The 1:1 optical line protection is implemented by the OLSP/OLSPA/OLSPB board, which provides two optical channels. In normal cases, the service signals are transmitted on the working optical channel, and the standby optical channel uses the auxiliary light source of the board for fiber line detection. When the fiber of the working optical line is faulty and the standby optical channel is normal, the service is switched to the standby optical channel through the optical switch inside the OLSP/OLSPA/OLSPB board.

 1:1 protection uses auxiliary light sources instead of splitting the optical signals into two channels. Therefore, the insertion loss introduced by 1:1 protection is smaller than that of 1+1 protection.

 1+1 optical line protection: The switching time is ≤50ms, excluding the detection
time.

 The following devices can support 1:1 OLP:

 OSN 6800/OSN 8800T16/T32/T64/OSN 8800 UPS/OSN 9800.

 OSN 1800 cannot support OLSP board.


 There are two application modes for optical line protection:

 Application mode one: 1+1 OTS path protection

 Application mode two: 1+1 OMS path protection

 Optical line protection range: Line fiber, that is, the fiber between the source OLP
board and the sink OLP board.

 Optical line protection is used to protect the optical fibers between adjacent sites
by using the separated route. Therefore, the optical line protection is meaningful
only in the chain network. In the ring topology, services between sites can be
protected by using different routes of the ring network. Therefore, optical line
protection is not used.

 Optical line protection is performed by segment. For example, if fibers between A and B are broken, only OLP switching between A and B is triggered; OLP switching between the downstream B-C and C-D spans is not triggered.

 In optical line protection, OLP is generally configured before the outbound fiber
(1+1 OTS path protection), that is, after FIU (OSC) or OA (ESC). Theoretically, it can
be configured between the amplifier and the multiplexer/demultiplexer board
(1+1 OMS path protection). In this case, two sets of OA boards are required, which
is not recommended. If both B and C are pure OA sites, OLP boards can be used
only at sites A and D, and OLP boards are not used on sites B and C. However,
there are two problems: 1. B and C need to use two sets of OLA. 2. This
configuration depends on the automatic shutdown function of the OA board.
However, this function has a certain delay, which prolongs the switching time.
 The switching modes include revertive and non-revertive. Revertive indicates that
services are automatically switched back to the working channel after the working
channel recovers, and non-revertive indicates that services are not automatically
switched back to the working channel after the working channel recovers. By
default, the optical line protection is non-revertive.
 In the case of 1+1 optical line protection, the switching mode can be set to single-ended or dual-ended switching. Single-ended switching does not require the APS protocol, whereas dual-ended switching does.
 For 1+1 single-ended switching, both POWER_DIFF_OVER and MUT_LOS are SF
conditions. For 1+1 dual-ended switching, MUT_LOS is used as the SF condition,
and POWER_DIFF_OVER is used as the SD condition.
 The conditions for triggering automatic switching of 1+1 optical line protection are as follows:
 MUT_LOS: the input optical power is lost. When the OLP board fails to detect the optical power, protection switching is triggered. The threshold can be set; the default value is -35 dBm.
 POWER_DIFF_OVER: the difference between the input optical power of the working channel and that of the protection channel crosses the threshold. The difference threshold can be set in the range 3~8 dB; the default value is 5 dB. The initial difference between the working and protection channels must be set manually, in the range -10 dB~10 dB.
 The “active and standby channel difference” refers to the received power difference between the active and standby optical channels. The “active and standby channel difference threshold” refers to the allowed range of this received power difference.

 On the engineering site, the quality and conditions of the active and standby optical channels cannot be guaranteed to be exactly the same, so some difference is allowed and reasonable. Hence, the NG WDM equipment provides an initial active/standby channel difference that can be set using the NM system to eliminate the difference introduced by the engineering design.
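The two trigger conditions and the initial-difference compensation can be sketched as below. The direction of the POWER_DIFF_OVER comparison (working channel lower than protection channel) and the function name are assumptions for illustration:

```python
def olp_switch_needed(working_dbm: float, protection_dbm: float,
                      los_threshold_dbm: float = -35.0,
                      diff_threshold_db: float = 5.0,
                      initial_diff_db: float = 0.0) -> bool:
    """1+1 OLP automatic-switching sketch.

    MUT_LOS: working-channel power at or below the LOS threshold
    (default -35 dBm). POWER_DIFF_OVER: the working channel falls below
    the protection channel by more than the threshold (3~8 dB, default
    5 dB) after removing the engineered initial difference.
    """
    if working_dbm <= los_threshold_dbm:
        return True  # MUT_LOS on the working channel
    compensated_diff = (protection_dbm - working_dbm) - initial_diff_db
    return compensated_diff > diff_threshold_db  # POWER_DIFF_OVER
```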
 Configuration procedure:

 In the NE Explorer, select an NE and choose Configuration > Port Protection from the Function Tree.

 In the Port Protection window, click New. In the Confirm dialog box, click OK. The Create Protection Group dialog box is displayed. Select a protection type. The available protection types are optical line protection, intra-board 1+1 protection, and client-side 1+1 protection.
 By receiving and transmitting signals on both the working channel and protection
channel, OLSP provides protection for services on the working channel. In normal
situations, protected service signals are transmitted and received using the
working channel, and the auxiliary light source transmits C-band auxiliary optical
signals to the protection channel to monitor the status of the protection channel.
When a fault occurs on the working channel, both the transmit and receive
switches undergo a switchover, the protected service signals are switched to the
protection channel, and C-band auxiliary optical signals are switched to the
working channel so that the status of the working channel can be monitored.
 Signal fail (SF): If 1:1 optical line protection is configured, the SF condition is that the OLSP/OLSPA/OLSPB board reports MUT_LOS.

 1:1 OLP only supports OTS.

 The APS protocol and overhead bytes need to be forwarded between the
OLSP/OLSPA/OLSPB and ST2/AST2 through the system control board. All nodes
must be configured with system control boards.

 When 1:1 protection is configured, the OLSP/OLSPA/OLSPB, ST2/AST2, SCC, and


STG boards must be configured in the same subrack.
 The range of intra-board 1+1 protection is as follows: Fiber on the OCh trail, that is,
the fiber between the source OTU board and the sink OTU board.

 OTU dual fed and selective receiving: support revertive mode/non-revertive mode.
The default mode is non-revertive mode.

 OTU+OLP dual fed and selective receiving: only support non-revertive mode.
 Intra-board 1+1 protection applies to chain and ring topologies:

 In chain networking, intra-board 1+1 protection is similar to optical line protection. Therefore, separate routes need to be provided between adjacent sites.

 In ring networking, intra-board 1+1 protection uses the separated paths on the ring to protect services. That is, services are transmitted clockwise and counterclockwise on the ring and finally reach the destination node.

 The range of intra-board 1+1 protection is the fiber on the OCh trail, that is, the fiber between the source OTU board and the sink OTU board. It cannot protect the OTU board itself.
 R_LOS triggers protection switching. Set the R_LOS alarm threshold on the U2000
based on the engineering requirements. The default value is -35dBm.
 OTU+OLP Mode: The optical power difference cannot be used as the switching
condition.
 Procedure:

 In the NE Explorer, select the NE and choose Configuration > Port Protection from the Function Tree.

 In the Port Protection window, click New. In the Confirm dialog box, click Yes. The Create Protection Group dialog box is displayed. Select Intra-Board 1+1 Protection from Protection Type. Enter the other parameters of the protection group one by one.

 For the unidirectional switching of intra-board 1+1 protection that uses OLP/DCP/QCP, select the two WDM-side optical ports of the OLP/DCP/QCP as Working Channel and Protection Channel respectively.

 For the bidirectional switching of intra-board 1+1 protection that uses OLP/DCP/QCP and LDX, select the two WDM-side optical ports of the OLP/DCP/QCP as Working Channel and Protection Channel respectively.

 Click OK. In the displayed dialog box, click Close. The created protection group is displayed in the window.

 Start the NE Explorer of the opposite NE and repeat the preceding steps.
 The client-side 1+1 protection of the OTU uses the dual-feed and selective-receiving function of the OLP/SCS board to protect the OTU board and the units after it.

 Client-side 1+1 protection performs switching based on the client-side ports, and it protects a larger range than other protection types.

 As the figure shows, the client-side 1+1 protection of the OTU actually has two configurations: one does not require the centralized cross-connection, and the other does.

 The software can perform the switching at different granularities based on the effect of the fault. When the line fiber is damaged, switching is performed for the entire board; in this case, the effect is the same as that of OTU intra-board 1+1 protection. When a certain client-side fiber is broken, switching is performed on the affected client-side optical port.

 In the master/slave mode, when the active and standby OTUs are in the same subrack, either the OLP or the SCS can be used; when they are not in the same subrack or the same NE, only the OLP can be used. When the active and standby OTUs are not in the same subrack, the NE software cannot distinguish between a subrack power-off and a communication failure between subracks; the former requires switching and the latter does not. Hence, in the inter-subrack application, the OLP must be used, because OLP switching is then triggered by the loss of optical signals when the subrack powers off.
 Client 1+1 protection scenarios:

 Intra-subrack protection: The working and protection OTUs are located in the same subrack.

 Inter-subrack protection: The working and protection OTUs are located in different subracks of one NE.

 Inter-NE protection: The working and protection OTUs are located in different NEs.

 Multi-vendor protection: The working and protection OTUs are from Huawei and a third party respectively.
 Trigger conditions: board offline, signal fail (SF), signal degrade (SD).

 Board offline:
 Removing and re-inserting the board.
 Cold resetting the board.
 Power outage on the board caused by a board fault or subrack power outage.

 Signal fail:
 Alarms on OTU boards: R_LOF, R_LOS, R_LOC, HARD_BAD, OTUk_LOF, OTUk_LOM, OTUk_AIS, OTUk_TIM, ODUk_PM_AIS, ODUk_PM_OCI, ODUk_PM_LCK, ODUk_PM_TIM, ODUk_LOFLOM, ODUk_TCMn_AIS, ODUk_TCMn_OCI, ODUk_TCMn_LCK, ODUk_TCMn_TIM, OPUk_CSF, OPUk_MSIM, OPUk_PLM, and REM_SF.
 Alarms on OLP/DCP/QCP boards: R_LOS, POWER_DIFF_OVER.

 Signal degrade:
 Alarms on OTU boards: B1_EXC, ODUk_PM_DEG, ODUk_PM_EXC, ODUk_TCMn_DEG, ODUk_TCMn_EXC, and REM_SD.
 Alarms on the LWXS board: IN_PWR_HIGH, IN_PWR_LOW.
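The trigger conditions above can be ranked by severity: board offline and SF outrank SD. A sketch using a subset of the alarm names listed above (the function name and the subsets are illustrative):

```python
# Subsets of the alarm lists above, for illustration only.
SF_ALARMS = {"R_LOF", "R_LOS", "R_LOC", "HARD_BAD", "OPUk_CSF", "REM_SF"}
SD_ALARMS = {"B1_EXC", "ODUk_PM_DEG", "ODUk_PM_EXC", "REM_SD"}

def trigger_condition(board_offline: bool, active_alarms: set) -> str:
    """Classify the protection trigger for a channel."""
    if board_offline:
        return "board offline"
    if active_alarms & SF_ALARMS:
        return "SF"   # signal fail outranks signal degrade
    if active_alarms & SD_ALARMS:
        return "SD"
    return "none"
```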


 Procedure:

 In the NE Explorer, select the NE and choose Configuration > Port Protection from the Function Tree.

 In the Port Protection user interface, click New. In the Confirm dialog box, click OK. The Create Protection Group dialog box is displayed. Select Client 1+1 Protection from Protection Type, and set the other parameters of this protection group.

 Click OK. Then click Close in the dialog box that is displayed. The created protection group is displayed in the window.

 Start the NE Explorer of the opposite NE and repeat Steps 1 through 3 to create the client 1+1 protection for the opposite NE.
 ODUk SNCP can be used in various forms of networking.
 When the line board or the line fiber is faulty, the fault detection point at the
receive end reports the event. The system control board controls the cross-
connect board to perform the switching.
 In this example, ODU1 services are transmitted from site A to site I. Sites A and I provide the dual-feed and selective-receiving function and are both configured with SNC/N protection. The protection switching is implemented according to the SM and PM states. The TCM is not activated. (The TCM is normally used to monitor a certain path between the service transmitting and receiving nodes, such as a path between nodes D and H. The source and sink nodes of the services do not require TCM termination.)

 The D-E-G-H section is a separate area. TCM1 is activated to monitor the transmission quality in this area. The area provides the dual-feed and selective-receiving function. Sites D and H are configured with SNC/S protection, which performs switching according to the SM and TCM states.

 Two paths exist between sites B and F. Site C is an optical repeater station and does not terminate the overheads. The protection switching is implemented according to the SM states. Similar to intra-board OTU protection, this area provides the dual-feed and selective-receiving function. (Site C is an optical repeater station, so the TCM is not required in this area.)
 Table 1 Alarms indicating SF:
 Alarms in the SM section: OTUk_LOF, OTUk_LOM, OTUk_AIS, OTUk_TIM.
 Alarms in the PM section (not supported by SNC/I): ODUk_PM_AIS, ODUk_PM_LCK, ODUk_PM_OCI, ODUk_PM_TIM, ODUk_LOFLOM.
 Alarms in the TCM section: ODUk_TCMn_AIS, ODUk_TCMn_LCK, ODUk_TCMn_OCI, ODUk_TCMn_TIM, ODUk_TCMn_LTC.
 Other alarms: R_LOF, R_LOS, HARD_BAD, EXT_MODULE_OFFLINE.

 Table 2 Alarms indicating SD:
 SNC/I: SM section: OTUk_DEG, OTUk_EXC; PM section: ODUk_PM_EXC, ODUk_PM_DEG; TCM section: unsupported.
 SNC/S: SM section: unsupported; PM section: unsupported; TCM section: ODUk_TCMn_EXC, ODUk_TCMn_DEG.
 SNC/N: SM section: unsupported; PM section: ODUk_PM_EXC, ODUk_PM_DEG; TCM section: ODUk_TCMn_EXC, ODUk_TCMn_DEG.

 ODUflex alarms are supported only in the PM section of the SNC/N protection sub-type.
 Procedure:

 In the NE Explorer, select an NE and choose Configuration > WDM Service Management from the Function Tree.

 On the WDM Cross-Connection Configuration tab, click Create SNCP Service. The Create SNCP Service dialog box is displayed. Select ODUk SNCP from the Protection Type drop-down list, and then select the corresponding service type and other parameters.

 Click OK. The operation result shows that the operation is successful.

 Click Close.
 The SW SNCP protection protects inter-subnet services and requires no protocol. It provides protection for topologies such as a ring with a chain, tangent rings, and intersecting rings, which ensures high flexibility in application.

 The switching process is similar to that of ODUk SNCP protection.

 The signal fail (SF) condition includes the following:
 The optical module is offline (PORT_MODULE_OFFLINE).
 The board is offline, including removing or cold resetting the board.
 Board-side alarms: R_LOF, R_LOS, REM_SF, OTUk_TIM, OTUk_LOM, ODUk_PM_AIS, ODUk_PM_OCI, ODUk_PM_LCK, ODUk_PM_TIM, OTUk_AIS, and OTUk_LOF.

 The signal degrade (SD) condition includes the following board-side alarms: B1_EXC, B1_SD, ODUk_PM_DEG, ODUk_PM_EXC, OTUk_DEG, and OTUk_EXC.
 Tributary SNCP protects client services accessed by tributary boards on an OTN network. It is implemented by using the dual-feed and selective-receiving function.

 Tributary SNCP is similar to ODUk SNCP, but the protection range is different. Tributary SNCP services are dually fed and selectively received between two tributary boards and one line board, thus protecting the equipment on the tributary side.

 If end-to-end protection of the ODU is not required and TCM subnet applications do not need to be configured, select SNC/I.

 If end-to-end protection of the ODU is not required but a TCM subnet application needs to be configured, select SNC/S.

 If end-to-end protection of the ODU is required, select SNC/N.

 The supported protection types vary with the type of the services accessed by the tributary board:
 When the tributary board accesses OTN services, SNC/I, SNC/N, and SNC/S are supported.
 When the tributary board accesses STM-16 or OC-48 SDH services, or SDH or SONET services of a higher rate, only SNC/I is supported.
 Alarms indicating SF and SD conditions:
 SF: SM section: OTUk_LOF, OTUk_LOM, OTUk_AIS, OTUk_TIM; PM section: ODUk_PM_AIS, ODUk_PM_LCK, ODUk_PM_OCI, ODUk_PM_TIM, ODUk_LOFLOM; TCM section: ODUk_TCMn_AIS, ODUk_TCMn_LCK, ODUk_TCMn_OCI, ODUk_TCMn_TIM, ODUk_TCMn_LTC; other alarms: R_LOF, R_LOS, R_LOC, HARD_BAD.
 SD: SM section: OTUk_DEG, OTUk_EXC; PM section: ODUk_PM_EXC, ODUk_PM_DEG; TCM section: ODUk_TCMn_EXC, ODUk_TCMn_DEG; other alarms: B1_EXC.
 SNC/I: supports SM section alarms.
 SNC/S: supports SM section and TCM section alarms.
 SNC/N (TCM): supports SM section and TCM section alarms.
 SNC/N (PM): supports SM section and PM section alarms.

 The triggering conditions of tributary SNCP protection are the same as those of
ODUk SNCP protection. When configuring tributary SNCP protection on the
U2000, set Protection Type to ODUK SNCP.
 Procedure:

 Select the NE in the NE Explorer, and choose Configuration > WDM Service Management from the function tree.

 In the lower portion of the WDM Cross-Connection Configuration window, click Create SNCP Service. The Create SNCP Service dialog box is displayed. Select ODUk SNCP from the Protection Type drop-down list. Then, configure the Service Type parameter.

 Click OK. The created protection groups are displayed on the interface.
 ODUk SPRing protection mainly applies to a ring network with distributed services
(for example, the services exist between neighboring sites). This protection uses
two different ODUk channels to achieve the protection of the distributed services.

 The ODUk SPRing protection applies to ring networks and thus requires the support of a network protection protocol. The protection adopts the dual-ended switching mode; that is, when the receive end of the working channel fails, both the receive and transmit ends are switched to the protection channels.

 Note: ODUk SPRing Protection is supported by OSN 6800/OSN 8800 T16/T32/T64.


 OptiX OSN 8800 supports two types of ODUk SPRing: Common style and
Enhanced style.

 For the common ODUk SPRing protection, if all protection channels of the
ring network are pass-through cross-connections, their performance cannot
be accurately detected. In this case, the protection fails. Hence, a
management node must be specified.

 For the enhanced ODUk SPRing protection, management nodes are not
essential to services.
 Each service is bound with a block ID to ensure service uniqueness.

 The management node must be bound with services.

 Only one node can be configured as the management node on the entire ring
network.

 Block ID: Block IDs are used to uniquely identify services on the ring network. The
IDs range from 1 to 31. The value 0 indicates the removal of binding relations. The
ODUk SP supports at most 32 sites. When the ring network is configured with 32
sites, services are added or dropped on two adjacent nodes and the pass-through
nodes are not involved.
 In the transmit direction: The client services to be protected are input through the
tributary board, cross-connected to the east working line board, and then dually
fed to the east and west protection line boards. In this way, the working signals
and protection signals are separated. After that, the working signals and
protection signals are respectively transmitted over the working and protection
channels.

 In the receive direction, when operation is normal, only the cross-connection of the working channel is enabled; that of the protection channel is disabled. When the working channel is faulty, the cross-connection of the working channel at the receive end is disconnected. The cross-connection of the protection channel that corresponds to the west line board then becomes available, and the services run on the protection channel.

 When the working channel is restored, because the protection is revertive, the service signals are switched back to the cross-connection that corresponds to the originally specified line board.
 The following description of protection switching takes the common ODUk ring protection as an example. As shown in the figure, the network is a ring consisting of six stations: A, B, C, D, E, and F. Each station has one service with its adjacent station. Station A is set as the management node to transmit three services (west working, west protection, and east protection). The following takes the ODU1 services between stations A and C, and between stations E and F, as an example.
 In normal cases, the services of A<->C are as follows:
 The working route of A and C is the west (A–B–C), and the ODU1 of the west
line board of station A is the working channel. The protection route between
A and C is the east (A–F–E–D–C), occupying the ODU1 of the east line board
at station A as the protection channel.
 The working route from station C to station A is the east (C–B–A), and the ODU1 of the east line board at station C is the working channel. The protection route between C and A is the west (C–D–E–F–A). The ODU1 of the west line board at station C is the protection channel.
 In normal cases, E<->F services are as follows:
 The working route of E and F is the west (E–F), and the ODU1 of the west line
board of station E is the working channel. The protection route between E
and F is the east (E–D–C–B–A–F), occupying the ODU1 of the east line board
at station E as the protection channel.
 The working route of F and E is the east (F–E), and the ODU1 of the east line
board at station F is the working channel. The protection route between F
and E is the west (F–A–B–C–D–E), occupying the ODU1 of the west line board
at station F as the protection channel.
 When station A detects that its west route is faulty,

 A<->C services need to be switched to the protection channel. The channel


usage between station A and station C is as follows: The service from station
A to station C is switched to the east protection route (A–F–E–D–C),
occupying the protection channel ODU1. The services from station C to
station A are switched to the west protection route (C–D–E–F–A), occupying
the protection channel ODU1. In this case, station A is the west local end,
station C is the west remote end, and stations F, E, and D are in the pass-
through state.

 Bidirectional services between site E and site F are not affected and are still
transmitted on the original working route.

 After the west route of station A recovers, the ODUk SPRing protection
performs the same operations as the preceding process. Then, the ODUk
SPRing protection switching is performed again and the channel is restored
to the normal state.
 When station F detects that its east route is faulty,

 E<->F services need to be switched to the protection channel. The channel


usage between site E and site F is as follows: The service from station E to
station F is switched to the east protection route (E–D–C–B–A–F), occupying
the protection channel ODU1. The services from station F to station E are
switched to the west protection route (F–A–B–C–D–E), occupying the
protection channel ODU1. In this case, station F is the east local end, station E is the east remote end, and stations A, B, C, and D are in the pass-through state.

 The bidirectional services between station A and station C are not affected
and are still transmitted on the original working route.

 After the east route of station F recovers, the ODUk SPRing protection performs the same operations as the preceding process. Then, the ODUk SPRing protection switching is performed again and the channel is restored to the normal state.

 No matter which route on the ring network is faulty, services on the affected working channel can be switched to the protection channel. Therefore, ODUk SPRing protection has the following feature: the protection channel is shared by the working channels of each section on the ring.
 In the ODUK SP, an NE may be in one of the following four states:

 I: The entire network is in idle state, and no optical alarms or external


commands trigger the protection switching. Normally, the ODUK SP ring is in
idle state.

 S: If switching conditions such as broken fibers exist in the ODUK SP ring, the
states of the NEs at both sides of the protection ring change to the switch
state, that is, the bridge connection switching is performed. The NEs switch
services from the working channel to the protection channel by using the
cross-connect units.

 WTR: When the switching conditions no longer exist in the protection ring, for example after fiber restoration, the states of the NEs at both sides of the protection ring change to the WTR state, and the current service conditions are the same as those in the switch state. If no switching conditions occur during the whole WTR duration, the entire network enters the idle state. The WTR duration avoids frequent protection switching caused by unstable lines. The WTR duration ranges from 5 minutes to 12 minutes, and the default is 10 minutes.

 P: When the site does not process services, services are directly passed
through.
 Trigger Condition:

 The board is faulty; for example, the line board is powered off or offline.

 Signal fail (SF): R_LOS, R_LOC, HARD_BAD, OTU2_LOF, OTU2_LOM, OTU2_AIS, OTU2_TIM, OTU3_LOF, OTU3_LOM, OTU3_AIS, OTU3_TIM, ODU2_PM_AIS, ODU2_PM_LCK, ODU2_PM_OCI, ODU2_PM_TIM, ODU3_PM_AIS, ODU3_PM_LCK, ODU3_PM_OCI, ODU3_PM_TIM, ODU0_LOFLOM, ODU1_LOFLOM, ODU2_LOFLOM, ODUk_TCM6_AIS, ODUk_TCM6_OCI, ODUk_TCM6_LCK, and ODUk_TCM6_TIM.

 Signal degrade (SD): ODUk_TCM6_DEG and ODUk_TCM6_EXC.


 SDH/PDH analyzers are mainly used to test indicators such as the jitter and bit error characteristics of the OTU board. In addition, they can be used as test signal sources to provide test signals for the OTU board. Common SDH analyzers include the ANT-20SE, HP37718A, MP1570A, and ANT-10G.

 The spectrum analyzer is used to test the spectral characteristics of the WDM
system, such as the central wavelength, SMSR, and OSNR. The common models
are MS9710C and HP86145B.

 The optical power meter is mainly used to test the signal optical power. The power
meter used in the WDM system must be a large-range power meter, such as OLP-
18B, which is the most commonly used meter in the index test.

 In addition, some tools, such as fixed optical attenuators, fiber jumpers, fiber
adapters, and fiber cleaning tools, must be prepared during the test.

 The fixed optical attenuator can be classified according to the attenuation and
interface. The NG WDM system adopts the direct insertion fixed attenuator. The
attenuation can be 15, 10, 7, 5, 3 or 1 dB.

 Pigtails are classified based on the length and connector type. In the NG WDM system, LC connectors are mainly used. Therefore, pigtails of different lengths and connector combinations, such as LC/LC, LC/FC, and FC/FC, need to be prepared.

 The FC port is mainly used to connect the signal to the optical port of the
instrument or the related port of the ODF.

 Several types of flanges (fiber adapters) are required, such as LC/LC, SC/FC, and SC/SC.


 NG WDM uses the L0+L1+L2 three-layer architecture. The L0 optical layer
supports wavelength multiplexing/demultiplexing and DWDM optical signal
adding/dropping. The L1 electrical layer supports cross-connection of ODUk/VC
services. The L2 layer implements Ethernet/MPLS-TP switching.

 Through the backplane bus, the system control board controls other boards. It
provides functions such as inter-board communication, service grooming between
boards, and power supply. The backplane bus includes: Control and
communication bus, electrical cross-connect bus, clock bus, etc.

 The functions of the modules in the figure are as follows:

 Optical-layer boards are used to process optical-layer services and implement optical-layer grooming at the λ level.

 Electrical-layer boards are used to process electrical-layer signals and perform optical-to-electrical conversion. Electrical-layer signals can be flexibly groomed at different granularities through the centralized cross-connect unit.

 The system control and communication board is the control center of the
equipment. It works with the network management system to manage the
boards of the equipment and realize the communication between the
equipment.

 The auxiliary interface unit provides input and output ports for clock/time
signals, alarm output and cascading ports, and alarm input/output ports.
 The OTU unit specifications described in this course mainly refer to the
specifications of the optical interfaces on the WDM side.
 The test meter sends test signal light that matches the rate of the tested board. The test signal is connected to the Rx optical interface of the OTU through a fixed optical attenuator. After the OTU completes wavelength conversion, the signal light sent from the OUT port is connected to the optical power meter or the optical spectrum analyzer to perform the test.

 The test method for the OTU board with the FEC function is the same as that for the OTU board without the FEC function. The only difference is that the average transmit optical power of the OTU board with the FEC function is higher than that of the OTU board without the FEC function.
 Note that the central wavelength is measured where the power has decreased by 3 dB from the peak, rather than at the wavelength of the peak point itself.

 If the transmitter is a single longitudinal mode (SLM) laser, the central wavelength is the wavelength at the peak of the main mode.
 In the actual test, the center wavelength is defined as the average value of the -
3dB bandwidth of the optical signals sent by the WDM-side laser of the OTU.

 The instrument used for testing the center wavelength is usually a multi-
wavelength meter or an optical spectrum analyzer.

 The nominal central frequencies and central wavelengths of the C-band even channels:

NCF (THz)  NCW (nm)   NCF (THz)  NCW (nm)   NCF (THz)  NCW (nm)   NCF (THz)  NCW (nm)
196.00     1529.55    195.00     1537.40    194.00     1545.32    193.00     1553.33
195.90     1530.33    194.90     1538.19    193.90     1546.12    192.90     1554.13
195.80     1531.12    194.80     1538.98    193.80     1546.92    192.80     1554.94
195.70     1531.90    194.70     1539.77    193.70     1547.72    192.70     1555.75
195.60     1532.68    194.60     1540.56    193.60     1548.51    192.60     1556.55
195.50     1533.47    194.50     1541.35    193.50     1549.32    192.50     1557.36
195.40     1534.25    194.40     1542.14    193.40     1550.12    192.40     1558.17
195.30     1535.04    194.30     1542.94    193.30     1550.92    192.30     1558.98
195.20     1535.82    194.20     1543.73    193.20     1551.72    192.20     1559.79
195.10     1536.61    194.10     1544.53    193.10     1552.52    192.10     1560.61

 NCF: nominal central frequency

 NCW: nominal central wavelength
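The NCW values in the table follow directly from the NCF values via λ = c/f. A minimal sketch of the conversion, rounding to 0.01 nm as in the table:

```python
# Convert an ITU-T grid nominal central frequency (THz) to the
# nominal central wavelength (nm), using c = 299792458 m/s.
C_M_PER_S = 299_792_458

def ncf_to_ncw(freq_thz: float) -> float:
    """Return the wavelength in nm, rounded to 0.01 nm as in the table."""
    wavelength_nm = C_M_PER_S / (freq_thz * 1e12) * 1e9
    return round(wavelength_nm, 2)

print(ncf_to_ncw(193.10))  # matches the table entry 1552.52
print(ncf_to_ncw(196.00))  # matches the table entry 1529.55
```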


 It is used to measure the spectral width of the optical pulse sent by the laser. Because the channel spacing of the WDM system is very small and the optical pulse spectrum of the laser is relatively wide, crosstalk between different optical channels is easily generated. Therefore, the -20 dB spectral width must not exceed the standard value.

 For the OTU boards of different rates, the requirements for different light source
types are different.
 The test method is the same as that of the central wavelength test.

 The smaller the -20 dB spectral width, the larger the dispersion tolerance.
 In the definition, the value of SMSR is the ratio of the power. In the test, the unit of
the optical power is dBm. Therefore, the value of SMSR is the difference between
the peak optical power of the main longitudinal mode and the peak optical power
of the most prominent side mode.
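Because both peak powers are read in dBm, the power ratio reduces to a simple subtraction. A minimal sketch of this calculation (the mode peak values are made-up illustrations, not real OTU readings):

```python
# SMSR from a coarse spectrum reading: one peak power (dBm) per
# longitudinal mode. The values below are illustrative only.
def smsr_db(mode_peaks_dbm):
    """SMSR (dB) = main-mode peak power minus the strongest side-mode peak.

    Since the powers are already in dBm, the power ratio becomes a
    difference in dB.
    """
    peaks = sorted(mode_peaks_dbm, reverse=True)
    return peaks[0] - peaks[1]

# Main mode at -2 dBm, strongest side mode at -38 dBm -> SMSR = 36 dB
print(smsr_db([-2.0, -38.0, -45.0, -51.0]))  # 36.0
```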
 Here we emphasize the specifications of the WDM-side optical interfaces.

 The receiver sensitivity is defined as the minimum acceptable value of the average
receive power required for reaching the BER of 1×10-12 at the receive point R.

 The receive overload point is defined as the maximum acceptable value of the
average receive power required by the 1×10-12 BER at the receive point R.

 Currently, there are two types of receiving modules: PIN and APD. Generally, the PIN receiving range is -18 to 0 dBm, and the APD receiving range is -27 to -9 dBm; the values for specific OTU boards may differ.

 During the test, connect the optical path as shown in the following figure. Gradually increase and decrease the attenuation of the VOA to adjust the power. When the tester detects that the BER just reaches 1×10-12, record the minimum reading of the optical power meter as the receiver sensitivity and the maximum reading as the overload point.
 The multiplexer and demultiplexer are passive optical components. Therefore, the
optical signals have a certain loss after passing through the multiplexer or
demultiplexer. We need to understand the loss of each channel, that is, the
insertion loss of the channel.

 If the optical components of the multiplexer and demultiplexer are different, the
requirements for insertion loss of each channel are different.
 The test methods for the multiplexer and demultiplexer are the same. Connect the tunable light source to the receive optical port of the tested board. Use an optical power meter to test the output optical power of the tunable light source and the optical power at the output port of the corresponding wavelength. The difference between the measured input value and output value is the insertion loss of that wavelength channel, in dB.

 If there is no tunable light source on site, you can use an OTU board instead of the tunable light source to test the insertion loss of a specific wavelength channel.

 How to test the insertion loss of the FIU board?
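The subtraction described above can be sketched as follows (the per-channel power readings are assumed example values, not measured data):

```python
# Insertion loss of a mux/demux channel: IL (dB) = P_in (dBm) - P_out (dBm).
def insertion_loss_db(p_in_dbm: float, p_out_dbm: float) -> float:
    return p_in_dbm - p_out_dbm

readings = {  # wavelength (nm): (input power dBm, output power dBm) - assumed
    1552.52: (-3.0, -8.5),
    1553.33: (-3.0, -9.1),
}
for wl, (p_in, p_out) in readings.items():
    print(f"{wl} nm: IL = {insertion_loss_db(p_in, p_out):.1f} dB")
```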


 Isolation is defined only for the demultiplexer.

 According to the definition of isolation, the above figure shows the connection for the isolation test. The tunable light source sends an optical signal of a specific wavelength to the IN port of the demultiplexer, and the signal leaves through the corresponding output port. First, measure the actual output optical power at that output port with a meter. Then, with the tunable light source unchanged, measure the power at the two ports adjacent to that output port; this gives the power of the input signal of the specific wavelength that falls into the adjacent channels. The adjacent-channel isolation is the logarithmic difference between the actual output power of the specific wavelength and the power measured in the adjacent channel. The isolation of non-adjacent channels can be obtained in the same way.
 During the test, adjust the VOA before the amplifier to ensure that the input
optical power of the OA board is within the normal range. Otherwise, the amplifier
may be overloaded or even burnt.

 During the test, use the optical spectrum analyzer to test the input and output optical power of the optical amplifier. The difference between the output power and the input power is the gain. The gain varies with the optical amplifier type; for example, the OAU, OBU, and OPU have different gains.

 The flatness of the gain must be less than 6 dB.
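The gain and gain-flatness checks can be sketched as below (the per-channel powers are assumed example values; the 6 dB flatness limit is stated above):

```python
# Gain per channel: G (dB) = P_out (dBm) - P_in (dBm).
# Gain flatness: max channel gain minus min channel gain, required < 6 dB.
def channel_gains_db(p_in_dbm, p_out_dbm):
    return [po - pi for pi, po in zip(p_in_dbm, p_out_dbm)]

p_in = [-20.0, -20.3, -19.8, -20.1]   # assumed per-channel input powers
p_out = [3.0, 2.5, 3.4, 2.9]          # assumed per-channel output powers

gains = channel_gains_db(p_in, p_out)
flatness = max(gains) - min(gains)
print(gains)             # per-channel gain in dB
print(flatness < 6.0)    # flatness requirement met -> True
```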


 The noise of the amplifier is mainly measured by the noise figure.

 The noise figure of an optical amplifier board refers to the degradation of the optical signal-to-noise ratio caused by the amplified spontaneous emission (ASE) of the EDFA after the optical signal is amplified. That is, it is the ratio of the signal-to-noise ratio of the input optical signal to the signal-to-noise ratio of the output optical signal. It is usually denoted NF.

 The requirement for the noise figure is that the NF of the PA is less than 5.5 dB and that of the BA is less than 6 dB.
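Expressed in dB, the OSNR ratio in the definition above again reduces to a subtraction. A minimal sketch (the OSNR readings are assumed values):

```python
# Noise figure approximated from measured OSNR values (both in dB):
# NF (dB) = OSNR_in (dB) - OSNR_out (dB), per the definition above.
def noise_figure_db(osnr_in_db: float, osnr_out_db: float) -> float:
    return osnr_in_db - osnr_out_db

nf = noise_figure_db(30.0, 25.0)   # assumed OSNR before/after the EDFA
print(nf)                          # 5.0
print(nf < 5.5)                    # meets the PA requirement (< 5.5 dB)
```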
 Reference answer:

1. C
 In the NG WDM system, the central wavelength of the supervisory channel is
preferably 1510nm, and the normal wavelength range is 1510±10nm.

 The average transmit optical power of the OSC in the NG WDM system is within the range of -4 to 0 dBm. The receiving range of the receive module is -48 to -3 dBm; that is, the receiver sensitivity is -48 dBm and the overload point is -3 dBm. Generally, the receive optical power of the OSC is about -30 dBm.
 We introduce the signals in two directions: Signal sending and receiving.

 Transmit direction: The output optical power of each channel needs to be tested. The test points are MPI-S and S'.

 The MPI-S point refers to the output port of the multiplexer and optical amplifier at the transmit site. S' indicates the OUT port of the optical amplifier at an intermediate OA station.

 In addition, the maximum output optical power at points MPI-S and S' must not be greater than 20 dBm. In other words, the optical power of the combined signals cannot exceed 20 dBm when the system is fully configured.

 Receive direction: We need to test the total optical power of the combined signals. The test points are MPI-R and R'.

 The MPI-R point refers to the input port of the optical amplifier and demultiplexer at the receive site. R' indicates the IN port of the optical amplifier at an intermediate OA site.

 In addition to the optical power of the combined signals, we also need to test the input optical power, OSNR, and maximum path difference of each channel. The test points are MPI-R and R'.

 The OSNR of each channel must be greater than 20 dB, preferably greater than 22 dB. The maximum path difference should be less than 3 dB.
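The per-channel acceptance criteria above can be sketched as a simple check (the channel readings are assumed example values; the 20 dB OSNR and 3 dB path-difference limits come from the text):

```python
# Check per-channel OSNR and the maximum power difference between channels.
def channels_pass(powers_dbm, osnrs_db, min_osnr_db=20.0, max_diff_db=3.0):
    osnr_ok = all(o > min_osnr_db for o in osnrs_db)
    diff_ok = (max(powers_dbm) - min(powers_dbm)) < max_diff_db
    return osnr_ok and diff_ok

powers = [-14.2, -15.0, -13.9, -14.6]   # assumed per-channel input powers
osnrs = [23.1, 22.4, 24.0, 21.8]        # assumed per-channel OSNR values

print(channels_pass(powers, osnrs))  # True: all OSNR > 20 dB, spread < 3 dB
```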
 Another important test in the system test is the 24-hour loopback performance
test without bit errors. The test is performed on each channel. Multiple channels
can be used for the test.

 Pay attention to the following points during the test:

 The test lasts for 24 hours. Generally, the test is performed after the system
commissioning is complete. In this case, the optical power of each channel
should be the optimal value.

 The number of cascaded channels cannot exceed 16, and the types of
services carried on the channels must be the same.

 Adjust the optical power range of the meter and the board. When performing a self-loop at a line port, you need to add a fixed optical attenuator to prevent the receive optical power from being overloaded.
 Reference answer:

1. 16
 Laser transceivers are used in optical transmission systems and associated test tools. A bare optical fiber can emit a laser beam, which has high power density but is invisible. Eyes can be injured if such a beam enters them.

 Generally, looking at an un-terminated optical fiber or a damaged optical fiber


without eye protection at a distance of greater than 150 mm does not cause eye
injury. However, eyes may be hurt if an optical aid such as a microscope,
magnifying glass, or eye loupe is used to look at an un-terminated optical fiber
even though the distance is greater than 150 mm.

 The Raman amplifier board launches high optical power. Before you operate or
maintain the board, turn off the laser for safety.
 The output optical power of OA boards (including EDFA and Raman boards) is
high. To avoid equipment damage, note the following:

 During the removal and insertion of optical fibers, optical connectors must be
clean. Ensure that fiber end faces and board optical ports are free of
contamination before removing and inserting optical fibers. If a connector is
contaminated, the optical fiber will be easily damaged.
 When you are following ESD procedures, take the following precautions:

 Check the validity and functionality of the wrist strap.

 Do not touch a board with your clothing.

 Wear an ESD wrist strap and place the board on an ESD pad when you
replace boards or chips.

 Keep the boards and other ESD-sensitive parts you are installing in ESD bags.

 Wear an ESD wrist strap when operating the ports of boards because they
are also ESD-sensitive.

 Keep packing materials (such as, ESD boxes and bags) available in the
equipment room for packing boards in the future.
 Notice: When installing a board, use proper force to prevent the pins on the backplane from being bent.
 Notice:

 When a loopback test is performed at an optical port using a fiber jumper,


the optical attenuation must be increased to avoid damage to the equipment
due to the extremely high optical power of the laser. For a board which
caters for an attenuator to be added, the attenuator must be added to the
receive optical port.

 Laser light is dangerous. The light is invisible to the eyes, with or without laser protective glasses. Do not look into optical connectors or ports. Failure to follow this warning can cause damage to the eyes, or even blindness.

 The Raman amplifier emits strong light. Do not insert or remove the fiber connector while the laser is working, to avoid personal injury.
 Proper temperature and humidity should be maintained inside the equipment
room for the transmission equipment to work well constantly, as shown in the
Table.

 Excessively high or low temperature or humidity will harm the transmission quality and service life of the equipment:

 Excessively high relative humidity in the equipment room over a long period will greatly jeopardize the equipment. It can cause poor insulation or even electrical leakage in certain insulation materials.

 When the relative humidity is too low, the captive screws may become loose as the insulation washers dry and shrink. Meanwhile, the static electricity generated in the dry climate may damage the circuits of the equipment.

 Excessively high indoor temperature will greatly reduce the reliability of the equipment. If the equipment runs in such a high-temperature environment for a long period, its service life will be shortened because the excessive heat accelerates aging of the insulation materials.

 The temperature and humidity values are obtained 1.5 m above the floor and 0.4
m in front of the equipment.

 Short-term operation means the consecutive working time of the equipment does
not exceed 96 hours, and the accumulated working time every year does not
exceed 15 days.
 Procedure (Using the CLETOP cassette cleaner):
 Turn off the lasers before the inspection. Disconnect both ends of the fiber to
be inspected.
 Use a power meter to measure and ensure that there is no laser light on the
optical connector.
 Press down and hold the lever of the cassette cleaner, and the shutter slides
back and exposes a new cleaning area.
 Place the fiber tip lightly against the cleaning area so that the end face is flat
on the cleaning area.
 Drag the fiber tip lightly on one cleaning area in the direction of the arrow
once.
 Procedure (Using lens tissue):
 Turn off the lasers before the inspection. Disconnect both ends of the fiber to
be inspected.
 Use a power meter to measure and ensure that there is no laser light on the
optical connector.
 Place a small amount of cleaning solvent on the lens tissue.
 Clean the fiber tip on the lens tissue.
 Use the compressed air to blow off the fiber tip.
 Use a fiberscope to inspect the adapter to check if there is any dirt.
 Do not touch the fiber connector after you clean it. Connect it to the optical
port board at once. If it is not used for the time being, put a protective cap
on it.
 Procedure:

 Turn off the lasers before the inspection. Disconnect both ends of the fiber to
be inspected.

 Test the optical power using a power meter. Ensure that the laser is turned
off.

 Select the cleaning stick with a proper diameter for a certain type of the
adapter.

 Place a small amount of cleaning solvent on the optical cleaning stick.

 Place the optical cleaning stick lightly on the optical adapters so that cleaning
solvent is against the fiber tip. Hold the stick straight out from the adapter
and turn the stick clockwise one circuit. Make sure that there is direct contact
between the stick tip and fiber tip.

 Use the compressed air to blow off the fiber tip.

 Use a fiberscope to inspect the adapter to check if there is any dirt.

 Connect the fiber to the board, or put a protective cap on the port.

 Turn on the lasers after you connect the fiber to the board.
 Procedure:

 Draw out the air filter.

 Clean the filter with water. Then wipe it with a cloth, and dry it with an air
blower.
 Warm reset of the SCC board:

 After a warm reset on the SCC board, FPGA on the board is not updated, and
the configuration data in the memory of the board remains the same.

 A warm reset can be performed in any of the following methods:

 By pressing the RESET button on the SCC board.

 Cold reset of the SCC board:

 When the SCC board is faulty, perform a warm reset of the SCC board. If the
fault persists after a warm reset, perform a cold reset.

 The methods of performing a cold reset of the SCC board are as follows:

 Perform a cold reset by removing and then inserting the SCC board.

 Cold reset of the other boards:

 When the board is faulty, perform a warm reset of the board. If the fault
persists after a warm reset, perform a cold reset.

 A cold reset can be performed as follows :

 By removing and then inserting the board.


 Don’t forget to add fix attenuator before Rx port or IN port to prevent overload
damage laser. Usually, the 10dB attenuator is used for hardware loopback.
 MON interface: online monitoring interface

 It is very useful for maintenance and troubleshooting. A small amount of the optical signal can be output through this interface to an optical spectrum analyzer or the spectrum analyzer unit, so that the spectrum and optical performance of the multi-channel signal can be monitored without interrupting services.
 The MON port is useful for maintaining and troubleshooting the services on the
primary channel.
 The mechanical optical attenuator is highly sensitive. Rotate the adjusting lever at a proper speed and with proper force. Otherwise, the optical power will change abruptly, which may affect services and damage the attenuator.

 It is strongly recommended to practice before maintenance.

 Please confirm the attenuator, which you will adjust, is correct.


 To replace a board, make sure that the board to be inserted and the board to be removed are of the same type and have the same characteristics.
 Follow the steps below to remove the board:

 Loosen the captive screws on the front panel of the board, if there are screws
on the front panel.

 Grip the ejector levers and apply an outward force until the ejector levers become horizontal and the connector of the board leaves the backplane.

 Apply force gently outward to pull out the board completely.

 Follow the steps below to insert a new board:

 Open the ejector levers of the board using two hands and align the board
with the guide rail of the slot.

 Slide the board along the guide rail into the slot slowly until it cannot move
forward.

 Close the ejector levers to hold the beam of the subrack.

 Tighten the two captive screws on the panel of the board.


 Perform an active/standby switching on the NMS.

 Double-click the NE icon on the Physical Map and the NE Panel tab is displayed.

 Right-click the NE icon and select NE Explorer.

 Choose Configuration > Board 1+1 Protection.

 In Board 1+1 Protection, select Cross-Connect Board 1+1 Protection. Select


Working/Protection Switching from the short-cut menu. Click OK on the
window displayed.

 Click Query. If the Active Board is the standby board, the switching succeeds.
 Follow the steps below to cancel the switching.

 Choose Configuration > Board 1+1 Protection in the Function Tree.

 In Board 1+1 Protection, select Cross-Connect Board 1+1 Protection. Select


Restore Working/Protection from the short-cut menu and click OK on the
window displayed.
 The replacement of EFI1

 The ID of the master subrack is 0 by default.

 ID1-ID4 correspond to bits 1-4 of SW2, and ID5-ID8 correspond to bits 1-4 of
SW1.

 Among these ID values, only ID1-ID6 are valid, and ID7 and ID8 are reserved.
The bits from high to low are ID6-ID1, by which a maximum of 64 states can
be set.

 When a DIP switch bit is toggled to ON, the value of the corresponding bit is set to 0.
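The ID encoding described above can be sketched as follows (a minimal illustration assuming ON = 0 and OFF = 1 per bit, with ID6 as the most significant bit):

```python
# Compute the subrack ID from the ID6..ID1 DIP switch positions.
# ON -> bit value 0, OFF -> bit value 1; ID6 is the most significant bit.
def subrack_id(switches_on):
    """switches_on: dict mapping bit index (1..6) to True if toggled ON."""
    value = 0
    for bit in range(6, 0, -1):          # ID6 (MSB) down to ID1 (LSB)
        value = (value << 1) | (0 if switches_on[bit] else 1)
    return value

# All bits ON -> every bit is 0 -> subrack ID 0 (the default master subrack).
print(subrack_id({b: True for b in range(1, 7)}))   # 0

# Only ID1 toggled OFF -> binary 000001 -> subrack ID 1.
print(subrack_id({1: False, 2: True, 3: True, 4: True, 5: True, 6: True}))  # 1
```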
 When the SCC has 1+1 protection and the switching is normal, replacement of the SCC does not break the communication between the NE and the T2000.

 When replacing the SCC on site, work together with a second-line engineer.

 When the SCC is in 1+1 protection, all of the data in the SCC is synchronized between the working and standby SCC boards, including the ID, IP address, gateway, and so on.

 Normally, only steps 1, 6, and 8 require on-site operation. The others are done remotely on the U2000.
 The battery on the SCC is to ensure that the configuration is kept upon a power
failure of the SCC.

 When the board is in use, place a jumper cap over the battery jumper to close the circuit so that the battery supplies power normally.

 When the board is not in use, use a jumper cap to disconnect the battery
jumper.
 Replacing the SCC without protection.

 When the standby slot is vacant, 1+1 protection can be set up first: install one new SCC, configure SCC 1+1 protection, and then follow the methods and steps discussed earlier for replacement under SCC 1+1 protection.

 When the standby slot is not vacant, SCC 1+1 protection cannot be configured, and replacing the SCC interrupts communication between the NE and the U2000. Only this situation is considered here.

 If ASON is used, the node ID also needs to be changed.

 Replace the board.

 If the CF card is needed to recover the original database, ensure that the CF card is correctly installed on the new SCC board before you insert the new SCC board.

 Connect the Web LCT/U2000 to the NE.

 The new board is initialized at the factory; hence, its default IP address is on the 129.9.X.X subnet.
 Choose Fault > Browse Current Alarm from the Main Menu.

 In the Filter dialog box, set filter criteria and click OK. The current alarm viewing
window is displayed.
 Choose Fault > Browse History Alarm from the Main Menu.

 In the Filter dialog box, set filter criteria and click OK. The history alarm viewing
window is displayed.
 Choose Performance > Browse WDM Performance from the Main Menu. Click the
Current Performance Data tab.

 In the Object Tree, select one or more NEs or boards, and click.

 Complete the following information: Monitored Object Filter Condition, Monitor Period, and Performance Event Type.

 Optional: In the Display Options group box, check the Current Value or
Maximum/Minimum Value check box.

 Click Query to query the data from the NE.


 Choose Service >Tunnel>Tunnel Management from the main menu.

 In the Filter Condition dialog box, set filter criteria and click OK.

 In the Tunnel Management window, click Running Status to sort all tunnels by
running status.

 Ensure that the running status of each tunnel is Up. Otherwise, rectify the MPLS
tunnel fault.
 Choose Configuration > NE Batch Configuration > NE Time Synchronization from
the Main Menu.

 In the Object Tree, select one or more NEs and click .

 Click Close in the Operation Result dialog box.

 Select one or more NEs in the list, right-click and choose Synchronize with NM
Time from the shortcut menu.

 Click Yes in the Time Synchronization Operation prompt box. Click Close in the
Operation Result dialog box.
 This slide describes how to periodically view the DCN communication status of NEs
to ensure DCN connectivity.

 Choose System > DCN Management from the main menu. The Filter NE
dialog box is displayed.

 In the Filter NE dialog box, set the filter criteria and click OK.

 On the NE tab of the DCN Management window, click Refresh.

 Ensure that the communication status of the NE is Normal. If the value of


Communication Status is Disconnected, rectify the DCN fault.
 On the U2000, choose Performance > Browse WDM Performance from the Main
Menu. Click the Current Performance Data option button.

 Select 15-Minute or 24-Hour for the Monitor Period field.

 In the left pane, select one or more NEs and boards, and click .

 Click the Gauge tab, and choose Working Temperature for the performance event
type. Choose Display Current Value in the Display Options pane.

 Click Query and then click Close in the Operation Result dialog box. Confirm that
the temperature is within the normal range.
 Choose Administration > NE Software Management > NE Data
Backup/Restoration from the main menu.

 In the NE View, select one or more NEs of the same type and click Backup.

 In the Backup to dialog box, select the location of the data to be backed up.

 NMS Server: Back up NE data to the root directory of the U2000 server.

 NMS Client: Back up NE data to the corresponding path on the U2000 client.

 Click Start to back up the database.
 Choose Administration > NE Software Management > NE Backup Policy
Management from the main menu. In the NE Backup Policy Management window,
click New Policy.

 In the NE navigation tree, select NE Type and NE Version. The resource name, IP
address, and version of the selected device type are displayed in the NE list.

 Click Next. The Set Policy dialog box is displayed.

 Set the backup policy.

 Click OK to complete the automatic backup of the selected NE. If you need to set
the automatic backup of other NEs, select the corresponding NE in NE Backup
Policy Management and right-click to enable the backup policy.
 Choose Administration > Back Up/Restore NMS Data > Back Up Database from
the main menu.

 Set the backup path on the server and click Backup. The U2000 starts to back up
the database and displays a dialog box indicating the operation progress.

 The database files are backed up to the default directory, in a folder named by the backup time:

 On Solaris and SUSE Linux, the files are backed up to NMS installation path/server/var/backup.

 On Windows, the files are backed up to NMS installation path\server\var\backup.
 Choose Service > Tunnel > Manage Protection Group from the main menu.

 Click Filter, and choose the protection group for the exercise switching.

 Right-click and choose Switch > Exercise Switching for the protection group.

 The Operation Result dialog box is displayed. Click Close.

 Right-click and choose Query Switching Status. Then, check whether the service is switched normally.

 Optional: If the protection group is configured as revertive, the service is switched from the protection tunnel back to the working tunnel after the working tunnel is restored and the WTR time expires. After the WTR time expires, right-click, choose Query Switching Status, and check whether Active Tunnel is Working Tunnel.

 Optional: If the protection group is configured as non-revertive, right-click and choose Switch > Clear. The Operation Result dialog box is displayed, indicating that the operation is successful. Click Close.
 After the manual switching is performed by using the U2000, the service should be
switched from the working tunnel to the protection tunnel. After the manual
switching is released, the service should be switched from the protection tunnel to
the working tunnel.

 In the Main Topology, select the source NE of an MPLS tunnel. Right-click the NE
and choose NE Explorer from the shortcut menu.

 In the Function Tree, choose Configuration > Packet Configuration > Ethernet Service Management > E-Line Service.

 Select an Ethernet service, and choose the protection group.


 You can check the real-time statistics about an Ethernet port by browsing the
statistics group performance of the Ethernet port.

 In the NE Explorer, select the Ethernet board and choose Performance >
RMON Performance from the Function Tree.

 Click the Statistics Group tab.

 Select the object from the drop-down list of Object. Set the range of
performance to be browsed, Query Conditions, and Display Mode.

 Click Start. The information about the statistics group performance of the
Ethernet port is queried and then displayed.

 Optional: Click Print or Save As to output the Performance data.

 Note: During the routine maintenance, among the numerous performance events,
the following performance events should get our attention.

 Received bytes and transmitted bytes (byte/s)

 Received packets and transmitted packets (pkt/s)

 ETHCOL: Collisions detected. This could be caused by inconsistent working modes at the two ends.

 ETHCRCALI: FCS and alignment error packets. This could be caused by a hardware fault of the Ethernet cable or Ethernet port, or even by a hardware fault at the remote end.

 TXBBAD: Total number of bytes in transmitted bad packets. This could be caused by a hardware fault.
 The history group performance includes the statistics about the Ethernet
performance of a certain period in the past. You can learn about the Ethernet
performance data of an Ethernet port by browsing the history group performance.

 Select the corresponding Ethernet board in the NE Explorer. Choose


Performance > RMON Performance from the Function Tree.

 Click the History Group tab.

 Select the object from the drop-down list of Object. Set the range of the
performance to be browsed, Ended From, To, History Table Type, and Display
Mode.

 Click Query, and the information about the history group performance of the
Ethernet port is displayed.
 You can determine the method for obtaining and saving the history data by
configuring an RMON history control group.

 Select the NE in the NE Explorer. Choose RMON Performance > RMON Setting from the Function Tree.

 Set the parameters related to 30-Second, 30-Minute, Custom Period 1, and Custom Period 2.

 Click Apply. A dialog box is displayed, indicating that the operation is successful.

 Click Close.
 The equipment supports RMON performance monitoring for physical ports and service objects. Ethernet service performance is also called VUNI performance, which is used to monitor a specific Ethernet service.

 In the NE Explorer, select Configuration > Packet Configuration > Ethernet Service Management.

 Choose the Ethernet service that needs to be monitored, right-click, and select Browse Performance from the drop-down list.
 Set the items of performance to be monitored, Query Conditions, and Display
Mode.

 Click Start. The information about the statistics group performance of the Ethernet
service is queried and then displayed.

 Optional: Click Print or Save As to output the Performance data.

 In this Performance Management window, you can check the real-time performance and history records on the Statistics Group and History Group tabs.

 The specific monitoring items can be selected on the RMON Setting tab, depending on the actual requirement.

 Note: The operation of History Group and RMON Setting is similar to that of
Ethernet port RMON performance. During the routine maintenance, the following
performance events should get our attention.

 Received bytes and transmit bytes (byte/s).

 Received packets and transmit packets (pkt/s).


 Through the RMON Performance operation for an MPLS tunnel, you can check the real-time performance and history records of the MPLS tunnel.

 In the NE Explorer, select MPLS Management > Unicast Tunnel Management > Static Tunnel.

 Choose the MPLS tunnel that needs to be monitored, right-click, and select Browse Performance from the drop-down list.
 Set the items of performance to be monitored, Query Conditions, and Display
Mode.

 Click Start. The statistics group performance of the MPLS tunnel is monitored in real time and the result is displayed in the window below.

 Optional: Click Print or Save As to output the Performance data.

 In this Performance Management window, you can check the real-time performance and history records on the Statistics Group and History Group tabs.

 The specific monitoring items can be selected on the RMON Setting tab, depending on the actual requirement.

 Note: The operation of History Group and RMON Setting is similar to that of
Ethernet port RMON performance. During the routine maintenance, the following
performance events should get our attention.

 Received bytes and transmitted bytes (byte/s)

 Received packets and transmitted packets (pkt/s)


 Through the RMON Performance operation for a PW, you can check the real-time performance and history records of the PW.

 In the NE Explorer, select MPLS Management > PW Management

 Choose the PW that needs to be monitored, right-click, and select Browse Performance from the drop-down list.
 Set the items of performance to be monitored, Query Conditions, and Display
Mode.

 Click Start. The statistics group performance of the PW is monitored in real time and the result is displayed in the window below.

 Optional: Click Print or Save As to output the Performance data.

 In this Performance Management window, you can check the real-time performance and history records on the Statistics Group and History Group tabs.

 The specific monitoring items can be selected on the RMON Setting tab, depending on the actual requirement.

 Note: The operation of History Group and RMON Setting is similar to that of
Ethernet port RMON performance. During the routine maintenance, the following
performance events should get our attention.

 Received bytes and transmit bytes (byte/s).

 Received packets and transmit packets (pkt/s).


 For normal services, the WDM-side and client-side lasers must be enabled.

 Right-click the NE icon in the Main Topology of the U2000 and choose NE Explorer
from the shortcut menu. Select a board and choose Configuration > WDM
Interface from the Function Tree.

 Click Query in the lower right corner to view the laser status. Normally, the laser
status is enabled.
 When handling a fault, you can set Loopback as required. There are three types of
loopback: Inloop, Outloop, and non-loopback.

 Right-click the NE icon in the Main Topology of the U2000 and choose NE Explorer
from the shortcut menu. Select a board and choose Configuration > WDM
Interface from the Function Tree.

 Click Query in the lower right corner to check the current loopback status. In
normal cases, the loopback status is Non-Loopback. You can perform the loopback
operation as required.
 Answer 1: D

 Answer 2: BCD
 An OTN network is divided into two layers: the optical layer and the electrical layer. The electrical layer consists of the OPUk, ODUkP, ODUkT, and OTUk layers. The optical layer consists of the OChr (OCh), OMSn, OTSn (OPSn), and OSC layers.

 Full functionality: OCh contains complete optical-layer overheads (OOS).

 Reduced functionality: OChr has no OOS.


 Frame alignment overhead
 FAS: Frame alignment signal
 MFAS: Multiframe alignment signal
 OTUk overhead
 SM: Section monitoring
 GCC0: General communication channel 0
 RES: Reserved
 ODUk overhead
 TCMACT: TCM activation/deactivation coordination protocol
 TCMi: Tandem connection monitoring
 FTFL: Fault type and fault location reporting
 PM: Path monitoring
 EXP: Experimental channel
 GCC1/2: General communication channel 1/2
 APS/PCC: Automatic protection switching channel/Protection
communication channel
 OPUk overhead
 PSI: Payload structure identifier
 JC: Justification control bytes
 NJO: Negative justification opportunity bytes
 The source and sink of the OCh client trail are client-side ports of the OTU. For
example, the LSX board corresponds to the path of a 10GE service on the client
side.

 The source and sink of the OCh trail are WDM-side ports of the OTU.

 The source and sink of an OMS trail are the start ports of multiplexed signals, corresponding to the path of a multiplexed signal.

 An OTS trail is a physical fiber connection between two adjacent OM/OD/OA boards on the main optical path.

 OTS, OMS, and OCh trails are optical-layer trails. OTUk, ODUk, and Client trails
are electrical-layer trails.
 Client trail: Optical Channel (OCh) client trail.

 ODUk trail: Optical channel Data Unit-k trail.

 OTUk trail: Optical channel Transport Unit-k trail.

 OCh trail: Optical channel trail.


 The OPUk performs client signal mapping, frequency adjustment, and rate
adaptation in the transmit direction.

 The ODUk layer writes the ODUk overheads:

 Writes TTI, BDI, STAT, and BIP-8 to the PM overhead at the ODUkP layer.

 Writes GCC1/2.

 If TCM is enabled, writes TTI, BIAE, BDI, STAT, and BIP-8 to TCMi.

 Generates the ODUk frame structure.

 The source functions of the OTUk layer:

 Write the overhead bytes to the OTUk layer.

 Calculate BIP-8.

 Write SM, GCC0, FAS, and MFAS.

 Generate the complete OTUk frame structure.
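BIP-8 itself is easy to illustrate: each of the eight result bits is even parity over the corresponding bit position of every byte in the monitored block, which reduces to XOR-ing the bytes together. A minimal sketch (the OTN framing and the exact monitored span are abstracted away):

```python
from functools import reduce

def bip8(block: bytes) -> int:
    """Bit-interleaved parity over 8 bit positions = XOR of all bytes."""
    return reduce(lambda acc, b: acc ^ b, block, 0)

# Three example bytes of a monitored block.
sample = bytes([0b10101010, 0b11110000, 0b00001111])
print(f"{bip8(sample):08b}")  # 01010101
```

At the sink, the same computation is repeated over the received block and compared against the BIP-8 byte carried in the overhead; mismatched bits are counted as errors.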


 The sink functions of the electrical-layer overheads monitor the overheads at the receive end, report defect information, insert signals or backward-insert signals, and suppress alarms and report performance events.
 Monitor the overheads of the TCM sublayer and report the defect information:

 If a TTI mismatch occurs, dTIM is reported.

 Monitor the BIP-8. If the bit error rate exceeds the threshold, the dDEG is
reported.

 Monitor the BIAE byte. If the value is 1011, the dBIAE alarm is reported.

 Monitor the STAT byte. If the value is 000, the dLTC is reported. If the
value is 111, the dAIS is reported. If the value is 110, the dOCI is reported.
If the value is 101, the dLCK is reported. If the value is 010, the dIAE is
reported.

 Monitor BDI bytes. If the value is 1, dBDI is reported.

 Bit error reporting: Reports bit error performance events at the remote end
and local end according to the monitoring of BIP-8 and BEI.
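The STAT interpretations listed above for the TCM sublayer can be sketched as a lookup table. This is an illustrative sketch, not NMS code; values follow the list above:

```python
# 3-bit STAT field value -> defect reported at the TCM sink.
TCM_STAT_DEFECT = {
    0b000: "dLTC",  # loss of tandem connection
    0b010: "dIAE",  # incoming alignment error
    0b101: "dLCK",  # locked
    0b110: "dOCI",  # open connection indication
    0b111: "dAIS",  # alarm indication signal
}

def tcm_defect(stat: int):
    """Return the defect for a STAT value, or None if no defect applies."""
    return TCM_STAT_DEFECT.get(stat & 0b111)

print(tcm_defect(0b111))  # dAIS
print(tcm_defect(0b001))  # None (monitored connection in use, no defect)
```

The PM STAT field is decoded the same way, except that only the dAIS, dOCI, and dLCK values apply, as described in the next section.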
 Monitor the overheads and report the defect information:

 Compare the expected TTI with the received TTI and report the dTIM.

 Monitor the BIP-8. If the bit error rate exceeds the threshold, the dDEG
alarm is reported.

 Monitor the STAT byte. If the value is 111, dAIS is reported. If the value is
110, dOCI is reported. If the value is 101, dLCK is reported.

 Monitor BDI bytes. If the value is 1, dBDI is reported.

 Insert signals or backward-insert signals based on the defect information:

 According to the signal fail conditions passed from the service layer and the dTIM, dAIS, dOCI, and dLCK defects of the current layer, the corresponding source function is notified to insert BDI.

 The corresponding source function is notified to backward-insert BEI according to the BIP-8 result.

 According to the signal fail conditions passed from the service layer and the dTIM, dAIS, dOCI, and dLCK defects of the current layer, path signal fail information is generated and passed to the next function.

 Signal degrade information is passed to the subsequent functions according to the dDEG status.
 OTUk_LOF: an alarm indicating that the frame alignment signal (FAS) is abnormal. This alarm occurs when the frame alignment process stays out of frame (OOF) for three consecutive milliseconds.

 OTUk_AIS: OTUk alarm indication. An AIS signal travels downstream, indicating that a signal failure has been detected in the upstream direction.

 OTUk_LOM: an alarm indicating that the multiframe alignment signal (MFAS) is abnormal. This alarm occurs when the multiframe locating process stays out of multiframe (OOM) for three consecutive milliseconds.

 OTUk_TIM: OTUk trail trace identifier (TTI) mismatch. This alarm is generated, when TIM detection is enabled, if the TTI received from the peer end mismatches the expected TTI at the local end.

 OTUk_DEG: an alarm indicating that the OTUk signal is degraded. When the BIP8 detection is in burst mode, this alarm is generated if the signal degradation or bit error count exceeds the threshold. When the BIP8 detection is in Poisson mode, this alarm is generated if the signal degradation exceeds the threshold.

 OTUk_EXC: OTUk bit error count crossing the threshold. This alarm is generated when the BIP8 detection is in Poisson mode and the bit error count exceeds the threshold.
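The "OOF persisting for three consecutive milliseconds" rule behind OTUk_LOF (and the OOM rule behind OTUk_LOM) can be sketched as a simple persistence counter, assuming the out-of-frame state is sampled once per millisecond:

```python
def lof_detector(oof_samples, threshold_ms=3):
    """Return True if OOF persists for threshold_ms consecutive samples.

    oof_samples: iterable of booleans, one per millisecond; True = OOF.
    """
    run = 0
    for oof in oof_samples:
        run = run + 1 if oof else 0   # reset the run on any in-frame sample
        if run >= threshold_ms:
            return True               # persistence met: raise OTUk_LOF
    return False

print(lof_detector([False, True, True, False, True, True, True]))  # True
print(lof_detector([True, True, False, True, True]))               # False
```

The same pattern, with the condition inverted, governs clearing the alarm once framing is recovered.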
 For a WDM product, the detection and transmission of alarms vary according
to the type of the signals that are accessed into the OTU. The OTU is classified
into the following types:

 Non-convergent OTU: an OTU that converts one channel of client service signals.

 Convergent OTU: an OTU that converges and converts multiple channels of client service signals.

 Regenerating OTU: an OTU that regenerates the corresponding service signals at an intermediate station.

 According to the type of the OTU and the type of the signals accessed by the
OTU, there are 7 situations defined.

 According to the type of the accessed service, this course focuses on the alarm
detection and transmission mechanism of the OTU board for processing SDH
standard signals and OTN standard signals.
 This section describes the alarm signal flow by analyzing how the OTU
processes the R_LOS alarm and PM BIP8 errors. The alarm signal flows of other
alarms are similar.

 R_LOS

 The client side of the OTU at station A detects the R_LOS alarm. The
R_LOS signals are processed on the WDM side of the OTU and then
are sent to station B. The client side of the OTU at station B detects
the REM_SF alarm. The alarm is then sent to the downstream client
device of station B, and the OTU reports the R_LOF alarm to the
client device.

 PM BIP8 errors

 The OTU at station B detects PM BIP-8 errors on the WDM side. When the PM BIP-8 errors exceed the threshold, the ODUk_PM_DEG or ODUk_PM_EXC alarm is generated; the number of errors determines which alarm is generated. In addition, performance events indicating ODUk PM remote bit errors are sent to the WDM side of upstream station A. The bit errors are then sent to the client device (bit errors are not sent to the downstream station unless the PM BIP-8 errors originate from the payload data). The alarms related to bit errors are detected in the client device.
 This section describes the alarm signal flow by analyzing how the OTU
processes the OTUk_LOF alarm. The alarm signal flows of other alarms are
similar.

 The WDM side of the OTU at station B detects the OTUk_LOF alarm. Then,
the OTU sends the ODUk_PM_BDI and OTUk_BDI alarms to the WDM side
of upstream station A. In addition, the alarm is then sent to the client side
of station B. After the alarm is processed on the client side, the R_LOF
alarm is detected in the client device.
 This section describes the alarm signal flow by analyzing how the OTU unit
processes the R_LOS and OTUk_LOF alarms.
 R_LOS
 The client side of the OTU at station A receives R_LOS signals. The R_LOS
signals are processed on the WDM side of the OTU and then are sent to
station B. The WDM side of the OTU at station B detects the
ODUk_PM_AIS alarm, and then an SF event is generated. The event
triggers a protection switching. The alarm is then sent to the downstream
client equipment of station B, and the OTU reports the ODUk_PM_AIS
alarm to the client equipment.
 OTUk_LOF
 The OTUk_LOF alarm is detected on the WDM side of the OTU board at
station B, and station B sends the OTUk_BDI alarm to the WDM side of
the OTU at the upstream station A. At the same time, the alarm is then
sent to the downstream station of station B, where it is processed on the
client side of the OTU. In this case, the ODUk_PM_AIS alarm is detected in
the client equipment. An SF event is generated on the WDM side of the
OTU at station B, and a service channel protection switching is triggered.
 The client side of the OTU at station A receives OTUk_LOF signals. The
OTU sends the OTUk_BDI alarm to the upstream client equipment of
station A. In addition, the LOF alarm is processed on the WDM side of the
OTU and then is sent to station B. The WDM side of the OTU at station B
detects the ODUk_PM_AIS alarm, and then an SF event is generated. The
event triggers a protection switching. The alarm is then sent to the
downstream client equipment of station B, and the OTU reports the
ODUk_PM_AIS alarm to the client equipment.
 OTUk_TIM
 After the OTU at station A receives the OTUk_TIM alarm on the client side,
it sends the OTUk_BDI alarm to the upstream station, but it does not send
the OTUk_TIM alarm to the downstream station. If the TIM is enabled in
the subsequent action, an SF event is generated and the WDM side of the
OTU at station B reports the ODUk_PM_AIS alarm. The ODUk_PM_AIS
alarm is sent to the downstream and the client device reports this alarm.
 After the OTU at station B receives the OTUk_TIM alarm on the WDM side,
this alarm is not sent to the downstream if the TIM is not enabled in the
subsequent action. If the TIM is enabled in the subsequent action, an SF
event is generated. After the client side of station B processes the event,
the client device reports the ODUk_PM_AIS alarm.
 ODUk_PM_TIM/ODUk_PM_BDI
 The client side of the OTU at station A receives ODUk_PM_AIS,
ODUk_PM_LCK, or ODUk_PM_OCI signals. The signals are not processed
and reported at the local station. After the signals are sent to station B,
the WDM side of the OTU at station B detects the ODUk_PM_AIS,
ODUk_PM_LCK, or ODUk_PM_OCI alarm. Then, an SF event is generated.
The event triggers a protection switching. The alarm is then sent to the
downstream client equipment of station B, and the OTU reports the
ODUk_PM_AIS, ODUk_PM_LCK, or ODUk_PM_OCI alarm to the client
equipment.
 An SF event is generated on the WDM side of the OTU at station B, and a
protection switching is triggered.
 This section describes the alarm signal flow through an example in which four
client-side services are accessed on the convergent OTU.
 Four channels of R_LOS signals are accessed on the client side.
 The OTU at station A accesses four channels of R_LOS signals on the
client side. After being processed in the middle part of the OTU at
station A, the alarm signals are then sent to station B. The REM_SF
alarm is generated on the client side of station B. The R_LOF alarm is
detected in the client equipment.
 One channel of R_LOS signals are accessed on the client side.
 The OTU at station A accesses one channel of R_LOS signals on the
client side, for example, channel 1 at optical port 3. After being
processed in the middle part and on the WDM side of the OTU at
station A and the WDM side of station B, the alarm signals are then
sent to the downstream station. The REM_SF alarm of channel 1 at
optical port 3 is generated on the client side of station B. The R_LOF
alarm is detected in the client equipment.
 Non-R_LOS signals are accessed on the client side.
 The signal flow of the R_LOF or the LOC is similar to that of the
R_LOS.
 When any other alarms are accessed, the same alarm is reported at each
detection point in the system.
 This section describes the alarm signal flow through an example in which four
client-side services are accessed on the convergent OTU.
 There is R_LOS, OTUk_LOF, OTUk_AIS, ODUk_PM_AIS, ODUk_PM_OCI, or
ODUk_PM_LCK on the WDM side.
 The WDM side of the OTU at station B accesses and processes the
alarm signals. The OTU sends the ODUk_PM_BDI and OTUk_BDI
alarm to the WDM side of upstream station A. In addition, the alarm
is then sent to the client side of station B. After the alarm is
processed on the client side, the R_LOF alarm is detected in the client
equipment.
 An SF event is generated on the WDM side of the OTU at station B,
and a protection switching is triggered.
 There are bit error alarms on the WDM side.
 The OTU at station B accesses and processes bit error alarm signals
on the WDM side, and then sends remote bit error performance
events to the WDM side of upstream station A. The bit error alarm is
then sent to the client side of the downstream station B, and the bit
error alarm is detected in the client equipment.
 An SD event is generated on the WDM side of the OTU at station B.
In this case, users can determine whether the SD event triggers a
service channel protection switching through proper configuration.
 Four channels of R_LOS signals are accessed on the client side.
 The OTU at station A accesses four channels of R_LOS signals on the
client side. After being processed in the middle part and on the WDM
side of the OTU at station A, the alarm signals are then sent to station B.
The ODUk_PM_AIS alarm of the corresponding channel is generated in
the middle part of station B. The ODUk_PM_AIS alarm is detected in the
client device.
 An SF event is generated in each channel of the OTU at station B, and a
protection switching is triggered.
 One channel of R_LOS, OTUk_LOM or OTUk_LOF signals is accessed on the
client side.
 The OTU at station A accesses one channel of R_LOS, OTUk_LOM or
OTUk_LOF signals on the client side, for example, channel 1 at optical
port 3. After being processed in the middle part and on the WDM side of
the OTU at station A and the WDM side of station B, the alarm signals are
then sent to the downstream station. The ODUk_PM_AIS alarm of channel 1 at
optical port 3 is generated in the middle part of station B. The
ODUk_PM_AIS alarm is detected in the client device.
 An SF event is generated in each channel of the OTU at station B, and a
protection switching is triggered.
 Alarm signals other than R_LOS, OTUk_LOM, and OTUk_LOF are accessed on the client side.

 When any of these other alarm signals is accessed, the corresponding alarm is reported at each detection point in the system.
 This section describes the alarm signal flow through an example in which four
client-side services are accessed on the convergent OTU.

 There is R_LOS, OTUk_LOF, OTUk_AIS, ODUk_PM_AIS, ODUk_PM_OCI, or ODUk_PM_LCK on the WDM side.

 The WDM side of the OTU at station B accesses and processes the
alarm signals. The OTU sends the ODUk_PM_BDI or OTUk_BDI alarm
to the WDM side of upstream station A. In addition, the alarm is
then sent to the client side of station B. After the alarm is processed
on the client side, the ODUk_PM_AIS alarm is detected in the client
device.

 An SF event is generated on the WDM side of the OTU at station B, and a protection switching is triggered.

 There are bit error alarms on the WDM side.

 The WDM side of the OTU at station B accesses and processes the
bit error alarm signals. The OTU sends the remote bit error
performance events to the WDM side of upstream station A. The
alarm is then sent to the client side of the downstream station B. The
error-dependent alarm is detected in the client device.

 An SD event is generated on the WDM side of the OTU at station B, and a protection switching is triggered.
 In the case of the regenerating OTU, all alarms in the SM section are terminated at the local station and are not sent to the downstream station, except that upon OTUk_LOF an ODUk_PM_AIS alarm is inserted toward the downstream station. Other alarms are sent to the downstream station and are reported on the WDM side of the OTU, except that upon R_LOS an ODUk_PM_AIS alarm is inserted toward the downstream station.
 OTU with the cross-connect function in the straight-through mode

 The four channels of optical signals accessed from RX1-RX4 on unit A at the upstream station are sent to channels 3-6 of the OUT port in the straight-through mode. One channel of optical signals input from the IN port on unit B at the downstream station is demultiplexed into four channels of optical signals, which are then directly sent to TX1-TX4.

 In the straight-through mode, the REM_SF and REM_SD alarms at the downstream station indicate that the signals at the corresponding client-side port at the upstream station fail or that bit errors at this port exceed the threshold. For example, when the services in channel 1 at optical port 3 on unit A at the upstream station fail, channel 1 at optical port 3 on unit B at the downstream station reports the REM_SF alarm.
 R_LOS
 The client sides of the OTUs at station A and station B work in the non-
auto-negotiation mode. The R_LOS alarm signal is received on the client
side of the OTU at station A. The alarm signal is sent to station B after it is
processed on the WDM side of the OTU. In this case, the REM_SF alarm is
generated on the client side of the OTU at station B, and the client
equipment at station B reports the LINK_ERR alarm.
 The client sides of the OTUs at station A and station B work in the auto-
negotiation mode. The R_LOS alarm signal is received on the client side of
the OTU at station A. The alarm signal is sent to station B after it is
processed on the WDM side of the OTU. In this case, the REM_SF and
LINK_ERR alarms are generated on the client side of the OTU at station B,
and the client equipment at station B reports the LINK_ERR alarm.
 LINK_ERR
 The client sides of the OTUs at station A and station B work in the non-
auto-negotiation mode. The client signals at station A contain LINK_ERR
alarms, and the client signals are transmitted transparently from station A
to the WDM side of the OTU at station B.
 The client sides of the OTUs at station A and station B work in the auto-
negotiation mode. In the case of the Ethernet board that supports the LPT
function, when the LPT enabling status is set to Disable, the LINK_ERR
alarm is not generated on the client side of the OTU at station B; when
the LPT enabling status is set to Enable, the LINK_ERR alarm is generated
on the client side of the OTU at station B.
 Answer 1:

 The client-side port on the local OTU board reports the R_LOS alarm, and
the opposite OTU board reports the REM_SF alarm.

 Answer 2:

 The client side of the transmit-end TQS board reports RS_BBE bit error
performance events, and the receive-end TQS board reports bit error
performance events. The client-side equipment reports bit error
performance events.
 Maintenance signals at the electrical layer include:
 AIS, PMI, OCI, LCK, IAE, BIAE, BDI, and BEI.
 The NS2 board is a typical convergence OTU board.
 LCK signal frame structure:
 The LCK signal is transmitted downstream, indicating that the upstream signal connection is locked and no signals pass.
 The ODUk LCK insertion area is filled with the repeating pattern 01010101. The insertion area is the entire ODUk signal except the FA OH and OTUk OH.
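As a toy illustration of the fill rule above (not Huawei board code), the sketch below writes the 01010101 pattern into a simplified 4 x 3824 pre-FEC G.709 frame, skipping only the FA OH and OTUk OH bytes in row 1; the frame geometry is an assumption of this sketch:

```python
# Sketch: fill the ODUk LCK insertion area with the repeating 01010101
# pattern. Simplified layout: 4 rows x 3824 columns (FEC omitted),
# FA OH in row 1 columns 1-7, OTUk OH in row 1 columns 8-14.

FA_OH_COLS = range(0, 7)      # frame alignment overhead, row 1
OTUK_OH_COLS = range(7, 14)   # OTUk overhead, row 1

def insert_lck(frame):
    """frame: list of 4 rows, each a bytearray of 3824 bytes."""
    for r, row in enumerate(frame):
        for c in range(len(row)):
            # everything except the FA OH and OTUk OH is overwritten
            if r == 0 and (c in FA_OH_COLS or c in OTUK_OH_COLS):
                continue
            row[c] = 0b01010101   # 0x55
    return frame

frame = insert_lck([bytearray(3824) for _ in range(4)])
```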
 OCI alarm:
 If no optical cross-connection is configured for the upstream service link or no fiber connection is configured between the OTU and the multiplexer board, the upstream service link inserts an OCI signal to the downstream NE. After receiving the OCI signal, the downstream NE reports this alarm.
 If an SM TTI mismatch occurs in the direction from station A to station B, station B reports the ODU2_PM_TIM alarm and station A reports the ODU2_PM_BDI alarm.
 SAPI & DAPI (Source Access Point Identifier & Destination Access Point Identifier):
 Checks whether the source and sink identifiers of the TTI expected to be received are consistent with those of the TTI actually received.
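The check above can be pictured with a small sketch; the field names and return convention are illustrative, not an NE API:

```python
# Sketch of the SAPI/DAPI consistency check: the expected TTI
# ("to be received") is compared field by field with the TTI actually
# received; any mismatch would raise a TIM-type alarm on the NE.

def check_tti(expected, received):
    """expected/received: dicts with 'sapi' and 'dapi' strings.
    Returns the list of mismatching fields (empty means consistent)."""
    return [f for f in ("sapi", "dapi") if expected[f] != received[f]]

ok = check_tti({"sapi": "NE-A", "dapi": "NE-B"},
               {"sapi": "NE-A", "dapi": "NE-B"})
bad = check_tti({"sapi": "NE-A", "dapi": "NE-B"},
                {"sapi": "NE-A", "dapi": "NE-C"})
```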

 ALS: Automatic Laser Shutdown.
 IPA: Intelligent Power Adjustment.


 Functions of point-to-point ODU2 nodes (from the figure):
 Nodes A and C (termination): ODU2P source/sink function, OTU2 source/sink function, and OCh source/sink function.
 Node B (regeneration): OCh sink function, OTU2 sink function, OTU2 source function, and OCh source function (the ODU2P layer is not terminated at node B).
 Station A inserts the ODU2_LCK signal. Station C then reports the ODU2_PM_LCK minor alarm and returns the ODU2_PM_BDI minor alarm upstream. In the figure, the 'a' in ODU2_PM_aBDI denotes the action of reporting the alarm; the ODU2_PM_BDI alarm is reported at station A.
 When a fiber cut occurs in the transmit direction from station A to station B, the WDM side of the OTU regeneration board at station B reports an R_LOS critical alarm and returns an OTU2_BDI minor alarm to the WDM side of station A.

 At the same time, the alarm is still transmitted to the downstream station of
station B. After the alarm is handled at station C, the WDM side of the OTU
board at station C reports the ODU2_PM_AIS minor alarm and sends the
ODU2_PM_BDI minor alarm to the WDM side of station A.
 When the fiber in the transmit direction from station A to station B degrades, the WDM side of the OTU regeneration board at station B detects bit errors and reports an OTU2_DEG minor alarm. In this case, the WDM side of station A receives OTU2_BEI.

 At the same time, the alarm is still transmitted to the downstream station of
station B. After the alarm is handled at station C, the WDM side of the OTU
board at station C reports the ODU2_PM_AIS minor alarm and sends the
ODU2_PM_BDI minor alarm to the WDM side of station A.
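The two fault walkthroughs above (fiber cut and fiber degradation between stations A and B) can be condensed into a small lookup table; it encodes only what these notes state and ignores real alarm suppression and insertion logic:

```python
# Lookup-table sketch of the alarm relationships for the A -> B -> C
# chain, with B as the regeneration site. Keys are the defect detected
# on the WDM side at station B; values are the consequent signals per
# the notes above.

RELATIONS = {
    "R_LOS":    {"at_C_downstream": "ODU2_PM_AIS",
                 "returned_to_A":   "OTU2_BDI"},
    "OTU2_DEG": {"at_C_downstream": "ODU2_PM_AIS",
                 "returned_to_A":   "OTU2_BEI"},
}

def consequent_alarms(detected_at_b):
    """Return the consequent alarms for a defect detected at station B."""
    return RELATIONS.get(detected_at_b)
```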
 Answer: ABCD
 Optical-layer overhead alarm detection is implemented by OOS at the optical
layer. Alarms such as R_LOS and MUT_LOS are optical-layer alarms but are not
reported based on OOS detection. Instead, they are reported based on optical
power detection.

 OOS overheads are non-associated overheads. OTS, OMS, and OCh overheads are transmitted through timeslots 21, 22, and 23 in the optical supervisory channel (OSC) signal frame structure.

 If the optical supervisory channel is not enabled in the WDM system or the
optical supervisory channel does not carry the OOS overhead (that is, the
optical layer overhead is not enabled), the optical-layer overhead alarm and
performance management cannot be implemented.
 This section describes how an OTU board processes optical-layer alarm signals
and how the alarm signals flow.

 Certain alarms are specific to the OCh, OMS, or OTS optical layer. This section
mainly describes the association relations between the optical-layer alarms
generated by each NE.

 In this scenario, there are two stations. Station A and station B are OTM
stations.

 In this scenario, the OTS, OMS, and OCh trails are between adjacent nodes.
 In this scenario, there are three stations. Station A and station C are OTM
stations, and station B is an OLA station.

 In this scenario, station OLA only amplifies signals and terminates the OTS layer.
The OMS and OCh trails are between station A and station C.
 In this scenario, there are three stations. Station A and station C are OTM
stations, and station B is an ROADM station.

 In this scenario, station ROADM adds and drops certain wavelengths. That is,
certain wavelengths are between stations A and B, or stations B and C, and the
other wavelengths are between stations A and C. Hence, certain OCh trails are
between stations A and B, or B and C, and the other OCh trails are between
stations A and C.
 This example shows the alarm signal flow at the OCh layer. The OCh overhead
is generated and terminated on the OTU board. Therefore, when an optical-
layer alarm is generated at the OCh layer, the OTU board at the downstream
site reports the corresponding alarm.

 When the OTU board at the receive end generates OTU2_SSF and ODU2_PM_SSF due to R_LOS, the transmit end reports OTU2_BDI and ODU2_PM_BDI.
 When the OMS trail that a wavelength traverses is interrupted, the
corresponding OTS reports the MUT_LOS alarm and each OCh trail of the OMS
section reports the corresponding R_LOS alarm.
 When the M40 and D40 boards are configured in the system, the system
supports the reporting of the MUT_LOS alarm and the downstream OTU board
reports the R_LOS alarm.
 This example shows the alarm signal flow at the OSC layer. The OOS overhead
is generated and terminated on the OSC board. Therefore, when an optical-
layer alarm is generated at the OSC layer, the OSC board at the downstream
site reports the corresponding alarm.

 When the OSC board at the receive end generates an OSC_LOS critical alarm, the OSC board at the transmit end reports an OSC_RDI minor alarm.
 Here the fiber is broken between two FIUs, so the optical-layer OTS signal is lost and the receive-side OTS generates MUT_LOS.
 At the OSC layer, the receive side generates OSC_LOS and sends an OSC_BDI to the transmit side.
 At the OCh layer, the receive side generates R_LOS.
 Here the fiber is broken between the OA and the FIU, so the OTS payload is lost and the receive-side OTS generates MUT_LOS.
 At the OCh layer, the receive side generates R_LOS.

 Answer: A
 To quickly and accurately locate and resolve a fault in the optical transport system,
maintenance personnel must have sufficient background knowledge and service
skills and be familiar with the network and the equipment used.

 Commonly needed transmission system test instruments include the optical power meter, SDH/SONET tester, SmartBits tester, optical spectrum analyzer, OTDR, and communication signal analyzer. Refer to the respective operator guide for more information.
 The relevant engineering files and documents (which need to be updated
periodically).
 External First, Followed by Internal

 When you locate a fault, first confirm that external conditions are normal. For
example, confirm that no faults occur to fiber, accessing client-side
equipment or power supply.

 Network First, Followed by NE

 When a fault occurs on the transmission equipment, multiple stations, rather than a single station, might report alarms. In this case, analyze the fault to determine the area in which the fault occurs. Based on the analysis, you can quickly and accurately locate the station in which the fault occurs.

 High-Severity Alarms First, Followed by Low-Severity Alarms

 High-severity alarms, such as critical and major alarms, should be analyzed first. After that, analyze low-severity alarms, such as minor alarms and warnings.
 The common methods to locate hardware faults can be briefed as "Analyze first,
then perform loopback, and finally replace the board."

 When a fault occurs, first analyze the signal flow, alarms, and performance event
data to determine the possible faulty station or optical section. Then measure the
optical power section by section and analyze the optical spectrum to locate the
fault to a board. Finally, if the fault persists, replace the board or fiber.
 The key point of service signal flow analysis is that maintenance personnel must be familiar with not only the signal flow of the single station they maintain, but also the service allocation across the whole network. At a minimum, they should know the service signal flow of an OMS.
 First, separate the whole network into OMS sections and determine which OMS section is faulty.
 Second, once the faulty OMS section is located, analyze the service signal flow station by station; the fault can then be located conveniently.
 Fault isolation using the U2000 has the following characteristics:
 Comprehensive: You are able to obtain fault information for the entire network.
 Accurate: You are able to obtain the current alarms, alarm generation time, and historical alarms of the equipment. In addition, you are able to obtain the specific values of the performance events.
 Complex: In some cases, a significantly high number of alarms and performance events can make analysis very difficult.
 Dependent: Fault isolation is entirely dependent on the normal operation of the computer, software, and communication equipment. If any of these are faulty, fault isolation capabilities may be limited or entirely lost.
 During troubleshooting, the maintenance personnel either in the NOC or on site
should work jointly to restore the system to normal state.
 The most commonly used test instruments for the WDM system include the optical
power meter, optical spectrum analyzer, SDH/SONET tester, SmartBits tester,
communication signal analyzer and multimeter. Among these tools, the optical
power meter and optical spectrum analyzer are most often used.

 Optical power meter

 The optical power of each point can be obtained from the U2000
performance data. However, to get a precise reading for a specific point, you
must measure the optical power at that point using an optical power meter.

 Optical spectrum analyzer

 Use an optical spectrum analyzer to test the optical spectrum of the output signal on the MON port of the board. Read the optical power, OSNR, and central wavelength from the spectrum analyzer, and analyze the gain flatness of the OA.

 SDH/SONET tester, SmartBits tester, and communication signal analyzer

 If it is suspected that poor interconnection is caused by signal errors, use these analyzers to check whether the frame signals and overhead bytes are normal, and check whether there are any abnormal alarms.

 Multimeter

 If it is suspected that the power supply is too high or too low, use a
multimeter to measure the input voltage.
 There are software loopbacks and hardware loopbacks, which have the following
advantages and disadvantages:

 A hardware loopback is performed on a physical port (optical port) using an optical fiber. Compared with the software loopback, the hardware loopback is more reliable. The hardware loopback, however, requires on-site operations. In addition, a receive optical power overload must be avoided when performing a hardware loopback.

 Compared with the hardware loopback, the software loopback is simpler but cannot locate a fault as accurately. For example, during a single-station test, if a software inloop is performed on an optical port and the service is normal, the line board may not necessarily be normal. However, if a self-loop is performed on the same optical port using a pigtail and the service is normal, the line board is normal.
 An inloop is used to locate faults on the OTU; an outloop is used to locate external faults.
 A proper attenuator must be used when performing a hardware loopback to prevent excessively high optical power from damaging the optical receiving module.
 The replacement method is used to resolve problems on external equipment, such
as optical fibers, flanges, client equipment, and power supply equipment. It is also
used to handle faults on boards or modules at a single station.

 The advantage of the replacement method is that the fault can be located within a
small range, and is a relatively simple procedure for maintenance personnel to
perform. This method requires that spare components be available. In addition, the
operator must exercise caution when performing operations. For example, when a
board is being inserted or removed, careless operations that could damage board
components should be avoided.
 Answer 1:

 The key to fault locating is to pinpoint a fault to a single station.

 Answer 2:

 Service Signal Flow Analysis, Alarm and Performance Analysis, Fault Analysis
Using Test Instruments, Loopback, Replacement.
 Handling Fiber Abnormalities
 Check whether optical fibers are broken and whether connectors are loose.
 Check whether the bending radius of the optical fiber is within the allowable range (at least 40 mm).

 Handling an Abnormal Environment
 If the service interruption occurs regularly, analyze the environmental conditions at the time the fault occurs. For example, external interference, abnormal temperature, or bad weather may cause service interruption.

 Handling Line Fault

 Both section-by-section loopback and instrument tests can help determine a line fault.

 If it is a line fault, switch the important services from the active route to the
standby route so as to resume service as soon as possible.

 Handling Hardware Faults
 Service interruption is usually caused by hardware faults such as board performance deterioration or failure.
 Board performance deterioration or failure typically involves the OTU, OA, and OM/OD boards.


 The interruption of all services of an NE is classified into three situations:

 The system provides an OSC, the optical amplifier board reports a MUT_LOS
alarm, and the optical supervisory board reports an R_LOS alarm. In this case,
suspect that the optical cable is cut off.

 The system provides an OSC, the optical amplifier board reports a MUT_LOS
alarm, and the OSC board reports no alarm. Suspect that the optical power is
abnormal.

 The system provides no OSC, and the optical amplifier board reports a
MUT_LOS alarm. Suspect that the optical amplifier board or the optical cable
is faulty.

 The possible causes of Single-Channel Service Interruption on the Client Side are:

 The output optical power of the board, interconnected to the OTU, in the
client equipment is abnormal.

 The fiber jumper between the OTU and client equipment is faulty.

 The OTU is faulty.

 If the OTUs in both the local and downstream stations transiently report the
R_OOF and R_LOF alarms at the same time, suspect that the client equipment
is faulty.
 For Single-Channel Service Interruption on the WDM Side:

 If the OTU in the opposite station is faulty or the fiber jumper is dirty, check
whether the input optical power on the client side and the output optical
power on the WDM side of the OTU in the opposite station are normal or not.

 If the fiber jumper between the DEMUX and OTU is dirty or faulty, check whether the output optical power of the corresponding channel of the DEMUX is normal. If it is not, clean or replace the fiber jumper between the DEMUX and OTU. Then check whether the input optical power of the OTU is normal; if it is not, clean or replace the fiber jumper again.
 If the input optical power of the OTU is normal but the fault persists, the OTU in the local station is faulty; replace the OTU.
 If the loopback test on the OTU in the local station succeeds and the optical power of the OTU is normal, the FEC modes of the OTUs in the local station and the opposite station might be inconsistent.
 Too-High Fiber Attenuation
 Whether the interfaces of the ODF, attenuator, flange and optical interface
board are firmly connected.
 Whether the interfaces of the ODF, attenuator, flange and optical interface
board are clean.
 Whether the interface of fiber connector is clean.
 Whether fibers are squeezed.
 Whether the bending radius of fibers is too small and whether fibers are
folded.
 Whether fibers are bundled too tightly.
 For line performance deterioration
 Replacing the optical fiber (best method).
 Increasing the optical power at the transmitting end properly (only a temporary solution, because increasing optical power may cause non-linearity and non-flatness of signals).
 Adjusting the VOA placed before the receiving end to restore the received optical power.
 Equipment faults
 Replace the faulty unit.
 Possible Causes

 Any fiber jumper in the multiplexing part degrades or is physically damaged.

 The gain of the optical amplifier board in the opposite station or local station
declines.

 Caution: To avoid fiber connection and attenuation problems, note the following
points in installation and maintenance.

 When fibers are coiled, the bending radius cannot be less than 4 cm. Fibers cannot be folded or bent at 90°.

 Fibers should be bundled with binding tapes. Note that the fibers cannot be
bundled too tightly.

 The fibers led out of the equipment should be covered with corrugated pipe.

 The fiber connector should be kept clean. Special cleaning tissue or alcohol
above 98% can be used.

 Methods of fault locating:

 Alarm and performance analysis

 Section-by-section loopback

 Replacement

 Instrument test
 Bit errors refer to errors that occur during transmission and are usually counted in bits. On the WDM equipment, the client side of the OTU board usually monitors only the B1 and B2 bytes of the SDH service.
 Optical power:
 A decrease of optical power affects the OSNR at the receiving end. If the OSNR margin is not large enough, the decrease of optical power may directly lead to OSNR deterioration, thus generating bit errors in the OTU at the receiving end.
 Dispersion:
 The dispersion tolerance of a 2.5 Gbit/s optical transmit module is large,
which needs no compensation. But the dispersion tolerance of a 10 Gbit/s
optical transmit module is small (hundreds of ps/nm). As a result, the signal
needs dispersion compensation after being transmitted for a certain distance.
On G.652 fiber, the signal needs dispersion compensation after being
transmitted for a distance of 30 km. On G.655 fiber, the signal needs
dispersion compensation after being transmitted for a distance of 100km.
 Non-Linearity of Optical Fiber
 The non-linear effects of fiber are related to the input optical power to a
large degree. If the input optical power is very high and the transmission
distance is long, non-linearity of fibers may seriously affect the performance
of the system. This results in performance decline at the receive end and bit
errors. As a result, in the 40-wavelength DWDM system, the input optical
power should be limited within 20 dBm.
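As a quick sanity check of this limit, the per-channel power follows from dB arithmetic, P_total = P_ch + 10·log10(N); the sketch below applies it to the 40-wavelength, 20 dBm case mentioned above:

```python
# Worked example of the power budget: with the total input power of a
# 40-wavelength DWDM system capped at 20 dBm, the per-channel power is
# the total minus 10*log10(number of channels).
import math

def per_channel_dbm(total_dbm, n_channels):
    """Per-channel power (dBm) for equal power across n_channels."""
    return total_dbm - 10 * math.log10(n_channels)

p = per_channel_dbm(20, 40)   # roughly +4 dBm per wavelength
```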
 Possible cause for single channel bit errors:

 The input optical power of the OTU is abnormal.

 The fiber jumper between the DEMUX and OTU, or that between the OTU
and ODF is aged or bent. Hence, the attenuation of the fiber jumper is too
large.

 The equipment temperature is too high.

 The OTU is faulty.

 The FEC modes of the two interconnected OTUs might be different.

 The auto-negotiation modes of the two interconnected OTUs might be different.

 Possible cause for multi-channel bit errors:

 The attenuation of the optical cable or that of the fiber jumper in the
multiplexing part is too large.

 The configuration of the dispersion compensation module (DCM) is improper.

 The equipment temperature is too high.

 The optical amplifier board, MUX or DEMUX is faulty.


 Possible causes for one NE offline problem:

 The NE is out of power supply, or SCC is faulty.

 The network cable between the NE and the HUB is disconnected or the
interface, to which this network cable is connected, of the HUB, is damaged.

 The IP address or ID of the NE is changed by mistake, or two NEs happen to have the same IP address or ID.

 The network cable between the NE and the WEB-LCT is disconnected or the
interface, to which this network cable is connected, is damaged.

 The AUX is faulty.

 Possible causes for all NEs in a subnet offline problem:

 The gateway NE is faulty.

 The network cable between the gateway NE and the HUB is disconnected or
the interface, to which this network cable is connected, of the HUB, is
damaged.

 The IP address or ID of the disconnected NE is changed by mistake.

 The optical cable is faulty.


 During deployment commissioning, the fiber jumper might be incorrectly
connected because the FIU board and the OSC board have many interfaces
marked with similar names. To prevent incorrect connection when you perform
commissioning in one direction of the OSC board, you can connect an optical
attenuator between the currently unused RM and TM optical interfaces in the
other direction of the OSC board to form a self-loop.

 The equipment networkwide must trace the same clock source. In the case of
WDM equipment, the priority of the internal clock source of a certain NE should be
set to the highest, and the other NEs trace the internal clock source of this NE. (In
the case of chain networking, the internal clock source of a terminal NE rather than
an intermediate NE should be set as the clock source to be traced networkwide.)

 When multiple pieces of equipment are connected by using a HUB or are cascaded
inter subracks, the extended ECC function of the network interfaces of the
equipment is used for communication. If not more than four pieces of equipment
are connected by using one HUB, the automatic extended ECC function can be
enabled. If more than four pieces of equipment are connected by using one HUB,
the manual extended ECC function is recommended for communication to prevent
ECC storms.
 Generally, you can troubleshoot protection faults by checking parameter settings
and alarm analysis methods.
 Answer:
 Service Interruption
 Optical Power Abnormality
 Bit Errors
 Communication Abnormality
 Protection Problems
 Conclusion and Suggestion

 Clean air filters regularly and ensure good heat dissipation of the equipment.

 If the TEMP_OVER alarm is reported, rectify the fault immediately. Check whether the fan speed is set correctly and whether the air intake vents are blocked.

 Clear alarms on the live network in a timely manner.


 The OA board detects the input and output optical power. If the OA board is faulty, more than one wavelength is affected. Therefore, the possibility that the OA board is faulty is low.

 The method of analyzing the signal flow of site B is similar to that of analyzing the
signal flow of site A.
 In the case of WDM equipment service interruption, the common troubleshooting
method is to check boards one by one from the faulty point to the upstream
section.
 The fault is caused by degradation of the receive optical power of the BD-2
TN11NS2 board at site B. Still, the cause of the degraded receive optical power
needs to be identified.
 Transient interruption occurs on the main optical path from site C to site B. An
optical cable fault can be identified because the input optical power received by
the OA and the OSC decreases.
 The hardware required by the OD system includes the following boards installed
on the NE:

 Spectrum analysis boards and optical amplifier (OA) boards: They are used to
obtain optical-layer performance data, monitor all optical signals in a
centralized way without interrupting services, and report the monitored
optical-layer performance data to the OD system.

 Gain-adjustable OA boards: They are used to adjust optical signal performance parameters.

 Software: The OD system is integrated in the U2000. Users can deliver network-
wide performance monitoring configuration commands using the U2000. After
obtaining the optical-layer performance data reported by each NE, the OD system
analyzes the performance data and graphically displays the analysis result. Based
on the configuration policy, the OD system instructs the OA boards to perform
adjustments and optimize optical-layer performance.
 Operation process:

 The user configures the OD function.

 The OD system delivers configuration commands to the equipment.

 The OD system accurately detects OSNR and other performance data through the interoperation between hardware and software.
 The equipment reports abnormal events and performance data to the OD system.

 The OD system obtains optical performance data from the equipment and
graphically displays it.

 The user checks for abnormal events and performance data.

 Optimization is started.

 The OD system triggers optimization of channels with abnormal performance.


 The OD can detect and report the single-wavelength optical power and OSNR at
each site of a 10G and 100G network, and enable users to view the receive- and
transmit-end single-wavelength optical power and OSNR on the OPM8 board at
the receive or transmit end.
 The centralized configuration includes setting monitoring parameters and
configuring automatic monitoring of network changes and historical data backup.

 The OD enables users to set monitoring parameters in a centralized way. Users do not need to be concerned with configuration details, and the configuration data is delivered automatically. This feature greatly saves labor costs and improves configuration efficiency.

 The OD automatically monitors network changes and periodically delivers the configured monitoring parameters to new services.

 The OD periodically backs up historical data.


 The OD visually displays OCh signal flows, and the optical power and OSNR of the
current E2E OCh trails.
 The OD considers abnormal OCh trails as to-be-optimized OCh trails and records
the abnormal alarm information. Users can manually start optimization and
commissioning on the OCh trails.
 The following describes the process of monitoring main optical path performance
online:

 Using the OD system, a user sends a task of monitoring main optical path
performance.

 The performance monitoring points on the functional boards over the main
optical path start monitoring the specific performance.

 Line loss monitoring: The receive-end OA board detects the line loss
and reports an alarm when the line loss exceeds the design EOL value.

 Monitoring of the optical power of the transmit-end OA board: The OD system compares the current input optical power of the transmit-end OA board in an OMS with the nominal optical power, and reports an alarm if the difference exceeds the threshold.

 Line loss compensation monitoring: The receive-end OA board detects the line loss and reports an alarm when the difference between the line loss and the gain value of the OA board exceeds the threshold.
 Fiber cut on the main optical path: The OD system checks for the alarm to determine whether a fiber cut occurs.

 The monitoring information is reported to the OD system on the U2000.

 The OD system processes the monitored performance data and displays it on GUIs.
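The line-loss, transmit-power, and compensation checks listed above reduce to simple threshold comparisons. The sketch below uses illustrative function names and placeholder values, not the OD system's actual interface:

```python
# Threshold-check sketch of main-optical-path monitoring: line loss vs
# design EOL, transmit-end OA input power vs nominal, and line loss vs
# OA gain (compensation). Each returns True when an alarm would be raised.

def line_loss_alarm(measured_loss_db, eol_db):
    """Alarm when the measured line loss exceeds the design EOL value."""
    return measured_loss_db > eol_db

def tx_power_alarm(input_dbm, nominal_dbm, threshold_db):
    """Alarm when input power deviates from nominal beyond the threshold."""
    return abs(input_dbm - nominal_dbm) > threshold_db

def compensation_alarm(line_loss_db, oa_gain_db, threshold_db):
    """Alarm when line loss and OA gain differ beyond the threshold."""
    return abs(line_loss_db - oa_gain_db) > threshold_db
```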
 The following describes the process of monitoring optical performance flatness:

 Using the OD system, a user sends a task of monitoring optical performance flatness.

 The functional boards on the main optical path start monitoring the specific
performance. The OPM8 board is used to scan the optical power of all single
wavelengths on the current OMS section, calculate the average optical power
of all OCh wavelengths in the maintenance state, and determine the
difference between the single-wavelength optical power and average optical
power. The OD system considers that the optical power is flat when the
difference between the single-wavelength optical power and average optical
power does not exceed the alarm threshold.

 The monitoring information is reported to the OD system on the U2000.

 The OD system processes the monitored performance data and displays it on GUIs.
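The flatness rule above can be sketched as follows, averaging the dBm values directly as a simplification; the threshold and power values are illustrative:

```python
# Sketch of the flatness check: average the detected single-wavelength
# powers (as an OPM8 scan would supply them) and flag any channel whose
# deviation from the average exceeds the alarm threshold.

def unflat_channels(powers_dbm, threshold_db):
    """Return indices of channels outside the flatness threshold."""
    avg = sum(powers_dbm) / len(powers_dbm)
    return [i for i, p in enumerate(powers_dbm)
            if abs(p - avg) > threshold_db]

powers = [-1.0, -1.2, -0.9, -4.0]   # channel 3 sags well below the rest
flagged = unflat_channels(powers, 2.0)
```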
 The following describes the optimization process in case of abnormal line loss
compensation:

 Fibers are aging and the line loss is increasing.

 The downstream power detection unit, that is, the optical amplifier (OA)
board, detects that the difference between the line loss and OA gain exceeds
the threshold, and therefore reports an alarm to the SCC board on the
downstream NE.

 The SCC board on the downstream NE reports an alarm to the SCC board on the source node in data communication network (DCN) communication mode. After receiving the alarm, the SCC board on the source node reports the alarm to the OD system.

 After receiving the alarm, the OD system locates the OCh trail where the
alarm occurs and adds it to the to-be-optimized trail list.

 Based on the adjustment command delivered by users, the OD system adjusts the line loss of the OCh trail until the output optical power of the downstream OA board is within the permitted range.
 The OD system calculates the average optical power based on the optical power of
the transmit-end OA IN port, compares the average optical power with the
nominal optical power, and determines whether the optical power of each
wavelength is proper based on the flatness of the single-wavelength optical power
detected by the OPM board. The following describes the optical power
equalization process:

 When the optical power of the transmit-end OA board, the optical power
flatness, or the OTU receive optical power is out of the permitted range, an
alarm is reported to the OD system.

 After detecting the alarm, the OD system locates the OCh trail where the
alarm occurs, adds it to the to-be-optimized trail list, and records the alarm
information. Then the OD system starts link optimization commissioning on
the OCh trail.

 The OD system adjusts the optical power of the transmit-end OA board at the transmit site of the OMS section so that the optical power reaches the nominal value.

 The OD system adjusts the optical power of the intermediate and receive-end
OA boards so that the average optical power is the same as the nominal
optical power and is within the permitted flatness range.

 The OD system adjusts the optical power of the receive-end OTU board so
that the optical power is within the permitted range.
 Answer: ABCD
 After the monitoring parameters of the Optical Doctor (OD) system are set for a
network in centralized mode, the OD system can be used to monitor the network
in real time, report abnormalities, and start optimization commissioning.

 When the U2000 is upgraded by migrating database data using the upgrade tool
UExpert, all U2000 data can be smoothly migrated to the upgraded U2000, and
OD parameters do not need to be set again. If the U2000 is upgraded in another
mode, database data cannot be smoothly migrated to the U2000, and therefore
OD parameters need to be set again after the upgrade is completed.
 Procedure

 Choose Configuration > WDM Optical Management > Parameter


Configuration from the main menu.

 Click the Synchronize Data on the U2000 tab.

 Choose the subnet to be synchronized from the Root navigation tree and
click Start. A confirmation dialog box is displayed. Note: When the installed
WDM commissioning component is used to synchronize data for the first
time, you must select Root to perform network-wide synchronization. In
other scenarios, you can select a subnet to synchronize the subnet data.

 Click Yes. Data synchronization starts. Note: If a data sharing conflict occurs during data synchronization on the U2000, another user may be deleting, uploading, copying, or checking data consistency on the NE. When this occurs, perform data synchronization again after that user completes the operations on the NE. During synchronization, do not perform other commissioning operations on the NE.

 Click OK in the dialog box that is displayed after the synchronization.


 Configuration Principles

 If the IN port on the receive optical amplifier (OA) board is equipped with a
dispersion compensation module (DCM), calculate the EOL value for the fiber
between the local NE and upstream NE using the following formula: EOL =
Design fiber loss + Maximum insertion loss of the DCM. If an OLP board is
installed in front of the receive OA board, the EOL value for the fiber between
the local NE and upstream NE is equal to the fiber loss between the upstream
OLP board and the local OLP board.

 Ensure that the inter-site fiber length and fiber type are consistent with the actual fiber configurations.
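The EOL rule above can be captured in a small helper. This is a sketch only; the parameter names and sample values are assumed for illustration:

```python
# Sketch of the EOL (end-of-life) designed-loss rule described above.

def eol_loss(design_fiber_loss_db, dcm_max_insertion_loss_db=0.0,
             olp_span_loss_db=None):
    """Designed Loss (EOL) for the fiber between the local and upstream NE.

    If an OLP board sits in front of the receive OA board, the EOL equals the
    fiber loss between the upstream and local OLP boards instead.
    """
    if olp_span_loss_db is not None:
        return olp_span_loss_db
    return design_fiber_loss_db + dcm_max_insertion_loss_db

print(eol_loss(22.0, dcm_max_insertion_loss_db=4.0))  # -> 26.0
print(eol_loss(22.0, dcm_max_insertion_loss_db=4.0, olp_span_loss_db=21.5))  # -> 21.5
```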

 Procedure

 Choose Inventory > Fiber/Cable/Microwave Link >


Fiber/Cable/Microwave Link Management from the main menu.

 In the Fiber/Cable/Microwave Link Management window, click Filter. Clear


the Include internal fibers check box, and click Filter in the Set Fiber/Cable
Browse Filter Criteria dialog box.

 Select one or multiple fibers/cables in the list and click Modify Fiber/Cable.

 In the Modify Fiber/Cable dialog box, set the Length (km), Designed
Loss(EOL)(dB), and Medium Type of the fibers/cables as required, and click
Apply. Note: To perform batch setting, select multiple lines, right-click the parameter column, and choose Modify in Batches.
 Configuration Guidelines

 Set System Wavelengths based on the maximum number of wavelengths


supported by the system. If System Wavelengths is not set, the optical
power target value cannot be calculated. If the parameter is incorrectly set,
the optical power adjustment will be incorrect. You can set the value of
System Wavelengths based on the frequency allocation table in the
marketing telecom design documents or based on the actual product
configurations. For example:

 If the WDM subnet is configured with the DWSS9 and EX40 boards, the
System Wavelengths value is 80wave.

 If the WDM subnet is configured with only the EX40 board, the System
Wavelengths value is 40wave.

 For the scenario that signals of different rates traverse the same OA, for
example, when 100 Gbit/s signals are received in a 100G system and the
signals traverse the same OA, set the Rate and Code Type of the OA based
on 100 Gbit/s signals.
 On the U2000, an OCh trail can be in the Unset, Commission, or Maintenance
state. The OD only monitors the trails in the Maintenance state. For an OCh trail
whose deployment commissioning is successful, the OCh trail status is
automatically set to Maintenance Status when the advanced option Set the trail
maintenance state is selected. If the advanced option is not selected, the OCh trail
needs to be manually set to Maintenance Status. For the OCh trail whose
expansion commissioning is successful, the OCh trail status is automatically set to
Maintenance Status.

 Setting the State of OCh Trails in the Manage WDM Trail Window

 Choose Service > WDM Trail > Manage WDM Trail from the Main Menu.

 In the displayed Set Trail Browse Filter Criteria dialog box, select OCh in the
Service Level.

 Click Filter All, and all OCh trails on the live network are displayed.

 Select and right-click a desired OCh trail. Choose Details from the shortcut
menu.

 Set Optical Commission Status in the dialog box that is displayed.


 Procedure
 From the main menu of the U2000, choose Configuration > WDM Optical
Management > Optical Doctor. The Optical Doctor window is displayed.
 Click Parameter Configuration. In the Network Parameter Configuration
dialog box that is displayed, select the desired subnet.

Note: Subnet A containing A1, A2, and A3 is used as an example. Subnet A is


automatically monitored only when A1, A2, and A3 are selected.

 If subnet A contains NEs B1 and B2 in addition to subnet A1, A2, and A3,
you are advised to create subnet A4, classify NEs B1 and B2 into subnet
A4, and then select the desired subnet for monitoring. If subnet A4 is
not created, NEs B1 and B2 will be automatically monitored and the
monitoring cannot be canceled after subnets A1, A2, and A3 are
selected for monitoring.

 If subnet A has a new subnet A4, A4 is monitored only when subnet A is


monitored. The monitoring policy of A4 is the same as that of subnet A.

 Only one monitoring policy is allowed on a network.


 Set the adjustment mode and monitoring parameters, such as the main
optical power, flatness, and OTU input optical power, based on the network
plan.
 Procedure

 In the Network Parameter Configuration window, click Manage Backup.

 In the Manage Performance Data Backup dialog box that is displayed,


configure the data backup policies.

 Click Save.

 Optional: Click Back Up Now. The OD will immediately back up the data of
the selected subnet.
 Right-click a subnet, NE or fiber connection in abnormal state in the OD view and
choose Browse Current Alarms from the shortcut menu. Then the alarms
generated in the subnet, NE or fiber connection are displayed in Alarm Info.
 You can query the to-be-optimized trails of an NE, a fiber connection, or the entire
network in the OD View.

 Querying the to-be-optimized trails of an NE or a fiber connection

 Right-click an NE or fiber connection in abnormal state in the OD view and


choose View to-Be-Optimized Trail from the shortcut menu. The to-be-
optimized trails of the selected NE or fiber connection are displayed in the
Online Optimization Management window. Note: View To-Be-Optimized
Trail is unavailable on the displayed shortcut menu after you right-click the
SPAN_LOSS_EXCEED_EOL, R_LOS, or MUT_LOS alarm record. Optimization
commissioning cannot clear any of the preceding alarms. Instead, the alarms
must be cleared manually.

 Querying the to-be-optimized trails of the entire network

 If the Optimize Management button is marked red, trails to be optimized


are present. Click Optimize Management. In the Online Optimization
Management window that is displayed, you can view networkwide trails to
be optimized, including the trails to be optimized after an abnormality, trails
being optimized, and trails failing to be optimized.
 In the OD view, select an NE or fiber connection. The activated OCh trails
associated with the NE or fiber connection are displayed in the Trail Info. Note:
When associated OCh trails of an NE or fiber connection change, right-click the NE
or fiber connection and choose Browse Relevant Trails from the shortcut menu to
refresh the Trail Info.
 Procedure

 Click Operate and choose Query Fiber Loss. The design loss and actual loss
of all fiber connections are displayed in the OD view. If the current
attenuation of a fiber connection exceeds the design attenuation, the fiber
connection is marked in a color corresponding to a major alarm.

 Optional: Click Operate and choose Compare With History Data. Choose
the historical data to be imported in the Compare With History Data dialog
box that is displayed. Then the historical fiber loss of the entire network is
displayed in the window.
 Procedure

 From the main menu of the U2000, choose Configuration > WDM Optical
Management > Optical Doctor. The Optical Doctor window is displayed.

 On the Alarm Info tab, view all fault information at the optical layer of the
monitored network.

 Optional: Click Filter to filter and display the alarm information on the alarm
info according to the alarm severity and the alarm name.

 On the Alarm Info tab, select an abnormal message. In the trail list, all
associated OCh trails are displayed. Note: The displayed OCh trails are
activated trails. In the trail list, right-click one or more OCh trails and choose
Trail Performance Analysis from the shortcut menu. The Trail Performance
Analysis window is displayed.
 Procedure

 On the Alarm Info tab, right-click an alarm record and choose View To-Be-
Optimized Trail from the shortcut menu. The Online Optimization
Management window is displayed and the trails related to the alarm are
displayed. Note: View To-Be-Optimized Trail is unavailable on the displayed
shortcut menu after you right-click the SPAN_LOSS_EXCEED_EOL, R_LOS, or
MUT_LOS alarm record. Optimization commissioning cannot clear any of the
preceding alarms. Instead, the alarms must be cleared manually.

 Select trails to be optimized and click Optimize.


 Procedure

 From the main menu of the U2000, choose Configuration > WDM Optical
Management > Optical Doctor. The Optical Doctor window is displayed.

 Enter the Trail Performance Analysis window.

 Method 1: Select a site or fiber in OD VIEW. All activated OCh trails


related to the site or fiber are displayed in the Trail Info window.

 Method 2: On the Alarm Info tab, right-click one or more alarm records
and choose View Associated Trail from the shortcut menu. In the trail
list, all associated OCh trails are displayed.

In the trail list, right-click one or more OCh trails and choose Trail
Performance Analysis from the shortcut menu. The Trail Performance
Analysis window is displayed. In the Trail Performance Analysis window, the
Optical Doctor (OD) system visually displays the multiplexed-wavelength optical
power, single-wavelength optical power, single-wavelength optical signal-to-
noise ratio (OSNR), and bit error rate (BER) of the monitored end-to-end (E2E)
trails.

 Select the desired OCh trail and click Analysis. The current performance data
of the trail is displayed.

 Click the Single-Wavelength Power tab. On the tab, query the single-
wavelength optical power of all nodes on the trail.
 Click the Single-Wavelength OSNR tab. On the tab, query the single-wavelength
OSNR of all nodes on the trail.

 The following describes the histograms on the Single-Wavelength OSNR tab:

 The current and historical single-wavelength OSNRs of the OA boards on the


trail are displayed in histograms. The sequence for OSNR bars of all boards
must be the same as that for the boards in the signal flow diagram.
 The following describes the histograms on the Multiplexed-Wavelength Power
tab:

 The current and historical multiplexed-wavelength input and output power of


the OA boards and multiplexer/demultiplexer boards on the trail are
displayed in histograms. The sequence for optical power bars of all boards
must be the same as that for the boards in the signal flow diagram.

 The histograms display the number of wavelengths that are in the


maintenance state and traverse the boards.

 Reference markers indicate the current upper and lower limits for the multiplexed-wavelength optical power of all OA boards except the local wavelength-dropping OA board. When the current input optical power of an OA board is not within the permitted range, a minor alarm is displayed in the bar for the OA board. In addition, a minor alarm is displayed for the OA board in the signal flow diagram.

 If no optical power is obtained for a board, the corresponding bar for the
board is not displayed and the board is marked abnormal in the signal flow
diagram. For the boards that do not support the query of input or output
optical power, no input or output optical power will be displayed for the
boards in the histograms. The multiplexed power cannot be queried on the
TN11WSD9 and TN11WSM9 boards.

 Right-click any area on the Multiplexed-Wavelength Power tab and choose


Display OA Only from the shortcut menu. The histograms only for OA
boards are displayed on the tab.
 On a trail for which the OD route is not configured, the Trail Performance
Analysis window will display the single-wavelength optical power and OSNR of OA
boards according to the values scanned by the MCA/OPM8 board. If no
MCA/OPM8 board is configured for the OA boards, no single-wavelength optical
power or OSNR will be displayed for the OA boards.

 Trail performance analysis can be performed only 10 minutes after an OD route is


configured in the Network Parameter Configuration window.

 The OD obtains device data at an interval of 10 minutes. Therefore, the


performance data is not real-time data.

 Procedure

 In the Trail Performance Analysis window, select the desired OCh trail and
click Analysis. The current performance data of the trail is displayed.

 Optional: The Import the historical data of trail performance window


displays the historical performance of the trail.

 In the signal flow diagram, double-click the desired OA board. The OA Info
dialog box is displayed.

 Click the Single-Wavelength Power tab to view the optical power of all
wavelengths on the OA board.

 Click the Single-Wavelength OSNR tab to view the OSNR of each


wavelength on the OA board.
 Procedure

 In the Trail Performance Analysis window, select one or more trails and click
Back Up Historical Data.

 Set Data Range in the Back Up Historical Data dialog box.

 Add remarks for this backup in Remarks. Note: If you add remarks in
Remarks, the name of the backup data will be in the format of
Date_Remarks.

 Click OK.
 Procedure

 In the Trail Performance Analysis window, select one or more trails. Click
Operate and choose Compare With Historical Data.

 Select the desired historical data in the Compare With Historical Data dialog
box. Note: The subnet backup data or trail backup data can be imported.

 Click OK.
 Optimization is necessary for networks that have been in service for extended
periods of time if the optical power of the existing multiplexed wavelengths
significantly deviates from the nominal values due to aging fibers, malfunctioning
boards, or human errors. The deviation causes the optical power of single
wavelengths to deviate, which in turn hinders expansion commissioning.

 Prerequisites

 Physical and logical fiber connections for to-be-optimized trails must be


correct and consistent.

 BER for OTU boards in the subnet can be queried.

 The BEFFEC_EXC alarm does not exist in the OMS sections of to-be-optimized
trail.

 The trails to be commissioned are complete.

 On a to-be-commissioned trail, the difference between the maximum and


minimum single-wavelength optical power of the original wavelength
scanned by OPM8 cannot exceed 10 dB; otherwise, commissioning cannot be
performed.

 There are OPM8 boards in each OMS section of the to-be-optimized trail and
light is scanned using OPM8 boards. Otherwise, the OMS sections without
OPM8 boards and their upstream OMS sections cannot be optimized.

 OSNR detection has been configured for a commissioning trail, and OSNR
Status of the commissioning trail is set to Enable.
 The Optical Doctor (OD) will automatically optimize abnormal trails in automatic
optimization mode.
 Procedure

 Choose Configuration > WDM Optical Management > Online


Optimization Management from the main menu.

 In the Set Trail Filter Criteria dialog box, set the optimization trail type and
specify the filter criteria to filter trails.

 Click OK. Trails that meet the filter criteria are displayed in the Online
Optimization Management window.

 Select a trail that fails to be optimized and click View Optimization Records.
In the View Optimization Records dialog box that is displayed, view the
detailed information about the trail optimization.
 Procedure

 In OD View, click Operate and choose Export Report.

 In the Export Report dialog box, set Data Type and Data Range and specify
the save path for the report.

 Click Generate.
 Procedure

 In the Trail Performance Analysis window, select one or more OCh trails and
click Export Report.

 In the Export Report dialog box, set Data type and Data range and specify
the save path for the report.

 Click Generate.
 Answer:

 Set basic parameters.

 Set OCh trail status.

 Configure the OD monitoring function.

 (Optional) Configure OD route for a trail.

 (Optional) Configure OSNR detection for a trail.

 (Optional) Configure the OD optimization function for a trail.


 Fiber performance testing is classified into acceptance testing and maintenance
testing based on the test implementation phase. Acceptance testing is performed
when links are offline. With the wide application of fibers, maintenance testing has
become a vital and usual part of the process. Regularly performed maintenance
testing helps detect the fiber performance in a network in a timely manner.
 FD: Fiber Doctor System
 Specific boards include: TN12RAU1, TN12RAU2, TN11SRAU, TN51RPC, etc.
 Hardware

 The
TN12RAU1/TN12RAU2/TN11SRAU/TN51RPC/TN97RPC/TN12ST2/TN11AST2/
TNF1AST4 boards support the line fiber quality monitoring function. They
emit probe light to obtain fiber performance data, receive detection results,
and report the obtained fiber performance data to the FD system.

 Software

 The FD system is integrated on the U2000. After users issue detection


commands on the U2000, the FD system receives the performance data
reported by equipment and graphically displays the data.
 Working process: A user starts fiber quality monitoring -> The FD system delivers configuration commands to the equipment -> The FD system accurately monitors the fiber performance using the hardware and software -> The hardware reports monitoring results to the FD system -> The FD system graphically displays the obtained performance data -> The user queries the monitoring results.
 Specific boards: TN12RAU1, TN12RAU2, TN11SRAU, TN51RPC, TN12ST2,
TN11AST2 or TNF1AST4.

 The fiber quality detection light and OSC channel of the


TN12ST2/TN11AST2/TNF1AST4 board share the same optical source.

 If TN12ST2/TN11AST2/TNF1AST4 uses Offline mode in Advanced parameter


mode for detection, OSC communication will be interrupted. When this
occurs, only fiber quality detection signals are sent and the dynamic
detection range is large. When Online mode in Advanced parameter mode or
Typical mode is used for detection, OSC communication is normal. In this
scenario, the fiber quality detection function works together with the OSC
communication function, and the dynamic detection range is small.
 The following describes the performance parameters that can be set in Advanced
parameter mode:

 Pulse width: width of probe optical pulses. When the pulse width increases,
the light emitting time increases and consequently a larger energy is
obtained. This means that a larger dynamic range can be acquired but also
larger dead zones will result.

 Measurement Range: maximum distance for the line fiber quality monitoring
function to sample data. The distance determines the sampling resolution.
NOTE: The total fiber loss will affect the detection distance and the dynamic
range is fixed for the same detection mode. Therefore, a larger total fiber loss
indicates a smaller detection distance. To maximize the detection capability,
the value of Measurement Range is generally greater than the actual
detection capability. When the fault point is beyond the permitted distance
range of a detection mode and cannot be located, select a detection mode
that provides a larger pulse width to locate the fault point.

 Detection time: Measurement is performed multiple times within the


detection time. The average value is obtained based on the measurement
results to improve the measurement precision. The actual duration from the
time when a detection event starts to the time when the detection results are
returned will be longer than the Detection time.

 The pulse width of emitted optical pulses varies according to monitoring modes,
and affects the detection distance and measurement precision. A greater pulse
width provides a larger dynamic range and longer detection distance but poorer
measurement precision.
 Procedure

 On the main menu of the U2000, choose Configuration > WDM Optical
Management > Fiber Doctor.

 In the Filter dialog box that is displayed, set filter criteria so that the desired
fibers are displayed on the main window.

 Select and right-click one or more fibers, and choose Enable OTDR from the
shortcut menu that is displayed.

 Select one or more fibers and click Start Detection and select the desired
detection mode.

 In the Parameter Settings dialog box that is displayed, set monitoring


parameters. Click OK to start the detection. The Progress column displays
the detection progress.
 Set State of the desired fiber to Enable.
 Here, we use the detection results in typical mode/Advanced parameter mode as
an example.
 The large reflection peak at about 9 km from the Raman board indicates that a
PC/UPC fiber is cut. In this scenario, the reflection value is generally greater than -
20 dB.
 An APC fiber cut event occurs at about 5.2 km from the Raman board. In this
scenario, the reflective value is generally less than -45 dB but the attenuation is
greater than 5 dB.
 The large reflection peak at about 0.2 km from the Raman board indicates that the
fiber end face is contaminated. In this scenario, the reflection value is generally
greater than -43 dB.
 The large reflection peak at about 0.45 km indicates that the fiber end face is burnt.
In this scenario, the reflection value is generally greater than -40 dB.
 The above figure is the schematic diagram for an event indicating a large insertion
loss on a connector at about 1.2 km from the Raman board. The insertion loss
shown in the blue circle is close to 3 dB.
 The following figure is the schematic diagram for an interconnection event
between G.652 and G.653 fibers. The interconnection point is at about 0.25 km
from the Raman board. In this scenario, a negative insertion loss will be reported.
 Answer: A user starts fiber quality monitoring. -> The FD system delivers configuration commands to the equipment. -> The FD system accurately monitors the fiber performance using the hardware and software. -> The hardware reports monitoring results to the FD system. -> The FD system graphically displays the obtained performance data. -> The user queries the monitoring results.
 OSI (Open System Interconnection): A framework of the International Organization
for Standardization (ISO) standards, used for communication between systems manufactured by different vendors. The communication process is divided into seven hierarchical layers: each layer uses the services provided by the layer below it and provides services to the layer above it. Layers 7 through 4 handle end-to-end communication between the message source and the message destination, and layers 3 through 1 handle network functions. The OSI model is the authoritative reference framework in networking, and all data communication protocols can be mapped to it.

 Functions of each layer of the OSI RM (Open System Interconnection Reference


Model):

 Physical Layer: Transmits raw bit streams on communication channels to


implement the mechanical, electrical, and functional features and processes
required for data transmission. The physical layer involves definitions of
voltage, cable, data transmission rate, and interface. The main network
devices at the physical layer are repeaters and hubs.

 Data Link Layer: Controls the physical layer, detects and corrects transmission errors so that the network layer sees an error-free link, and optionally performs flow control. Flow control can be implemented at the data link layer or at the transport layer. The data link layer is concerned with physical addresses, network topology, cabling, error checking, and flow control. The main devices at the data link layer are Ethernet switches.
 The TCP/IP protocol stack is similar to the OSI reference model. The physical layer and data link layer handle the raw bit stream transmitted on the communication channel: they implement the mechanical, electrical, and functional features and processes required for data transmission, and provide measures such as error detection, error correction, and synchronization so that the network layer sees an error-free line. Flow control is also performed at these layers.

 The network layer checks the network topology to determine the optimal route for
transmitting packets and forward data. The key problem is how to select a route
from the source to the destination. The main protocols at the network layer
include Internet protocol (IP), Internet Control Message Protocol (ICMP), Internet
Group Management Protocol (IGMP), Address Resolution Protocol (ARP), and
Reverse Address Resolution Protocol (RARP).

 The basic function of the transport layer is to provide end-to-end communication


between two hosts. The transport layer receives data from the application layer
and divides the data into smaller units when necessary, and transmits them to the
network layer and ensures that the information of each segment is correct. The
main protocols at the transport layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

 The application layer processes specific application details. The application layer
displays the received information, sends the user data to the lower layer, and
provides network interfaces for the application software. The application layer
contains a large number of common applications, such as Hypertext Transfer
Protocol (HTTP), remote login (Telnet), File Transfer Protocol (FTP), and Trivial File
Transfer Protocol (TFTP).
 Version: 4 bits. Indicates the IP version. The current version is IP version 4. The next
generation IP version is version 6.

 Header Length: 4 bits. Indicates the length of the IP header.

 Type of Service: 8 bits, which is usually used as QoS. This field can be divided into
two parts: Precedence (3 bits) and TOS (5 bits).

 Total Length: 16 bits. Total length of IP packets, including the IP header.

 Identifier (16 bits) / Flags (3 bits)/Fragment Offset (13 bits): These three parts are
used for IP packet fragmentation.

 Time to Live (TTL): 8 bits. When an IP packet is generated, an initial value is set.
When an IP packet is forwarded from a router to another router, the TTL value
decreases by 1. When the TTL value is 0, the IP packet is discarded.

 Protocol: 8 bits. Indicates the protocol type of the IP payload. For example, the value 6 indicates that the IP packet encapsulates a TCP segment.

 Header Checksum: 16 bits, which is used for error detection.

 Source/Destination IP Addresses: Indicates the source and destination IP addresses


of IP packets.

 Options: Variable length. This parameter is optional.


 An important part of the IP protocol is that each computer and other devices on
the Internet have a unique address, that is, an IP address. With this unique address,
the user can efficiently and conveniently select the required objects from millions
of computers when performing operations on the computer connected to the
network.

 The IP address has 32 binary bits, so in theory 2^32 (about 4.3 billion) IP addresses are available. If every Layer 3 device on the Internet, such as a router, had to store the IP address of every node in order to communicate, you can imagine how large the routing table on each router would be; this is impossible for a router. To reduce the size of routing tables, route more efficiently, and clearly distinguish network segments, IP addresses use a structured, hierarchical scheme.

 The IP address structure is divided into the network part and the host part. The IP
address hierarchical solution is similar to the commonly used telephone number.
The phone number is globally unique. For example, for the phone number 010-
82882484, the 010 field indicates the area code of Beijing, and the 82882484 field
indicates a phone number in Beijing. IP addresses follow the same idea: the leading network part identifies a network segment, and the trailing host part identifies a device on that segment.

 IP addresses are designed hierarchically. As a result, each Layer 3 network device does not need to store the IP address of every host; it stores only the network address of each network segment (which represents all hosts on that segment). This greatly reduces the number of routing table entries and increases routing flexibility.
 The IP address is usually identified in dotted decimal notation. That is, the IP
address of 32 bits is divided into four segments, each segment is 8 bits, and each
segment is represented by decimal.

 For each segment, the minimum value is 0 (8 bits are 0) and the maximum value is
255 (8 bits are 1).
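Dotted-decimal notation is simply a byte-by-byte rendering of the 32-bit address value, as this minimal sketch shows:

```python
# Convert between a 32-bit IP address value and dotted-decimal notation.

def to_dotted(addr32):
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted(text):
    value = 0
    for part in text.split("."):
        octet = int(part)
        assert 0 <= octet <= 255    # each 8-bit segment ranges from 0 to 255
        value = (value << 8) | octet
    return value

print(to_dotted(0xC0A80101))                        # -> 192.168.1.1
print(from_dotted("10.110.192.111") == 0x0A6EC06F)  # -> True
```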
 The network part of an IP address is called a network address. The network
address is used to uniquely identify a network segment or aggregation of several
network segments. Network devices in the same network segment have the same
network address. The host part of the IP address is called the host address, and the
host address is used to uniquely identify the network device in the same network
segment. For example, IP address (class A): 10.110.192.111/8, the network part is
10, that is, the network address is 10.0.0.0/8, and the host part is 110.192.111, that
is, the host address is 10.110.192.111/32.
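The network/host split in the 10.110.192.111/8 example can be reproduced by ANDing the address with a mask derived from the prefix length. A small sketch (the helper names are illustrative):

```python
# Derive network and host addresses from a prefix length, as in the /8 example.

def prefix_to_mask(prefix):
    return (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF

def to_dotted(addr32):
    return ".".join(str((addr32 >> s) & 0xFF) for s in (24, 16, 8, 0))

ip = (10 << 24) | (110 << 16) | (192 << 8) | 111   # 10.110.192.111
mask = prefix_to_mask(8)                            # 255.0.0.0
network = ip & mask                                 # network part
host = ip & ~mask & 0xFFFFFFFF                      # host part

print(to_dotted(network))  # -> 10.0.0.0
print(to_dotted(host))     # -> 0.110.192.111
```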
 Class A, class B, and class C addresses are the most commonly used. IP addresses are allocated by the International Network Information Center (InterNIC) according to the size of the organization: historically, class A addresses were reserved for government agencies, class B addresses were allocated to medium-sized companies, and class C addresses to small companies. However, with the rapid development of the Internet and the waste of IP addresses, IP address resources have become very tight.

 Private IP address ranges are reserved by the InterNIC and are freely managed within each enterprise intranet. Hosts using private IP addresses cannot access the Internet directly, because private addresses cannot be used on the public network and no routes to them exist there; using them on the public network would also cause address conflicts. When accessing the Internet, NAT technology is used to translate private IP addresses into public IP addresses that the Internet can recognize.

 NAT: Network Address Translation (NAT) is an Internet Engineering Task Force (IETF) standard. It allows an organization that has far fewer public IP addresses than internal network nodes to access the Internet. On a router, firewall, or host at the edge of an internal private network, NAT translates private IP addresses (such as addresses in the 192.168.0.0 range) into one or more public Internet IP addresses: it rewrites the address in each outgoing packet header and records the mapping. When response packets come back from the Internet, NAT uses the recorded mapping to translate the address back to the private IP address of the internal host.
 The IP address is divided into the network part and the host part. How to identify
these two parts?

 To identify the network part and host part of an IP address, you need to combine
the IP address with the address mask. The mask, like the IP address, is 32 bits
long. The mask bits corresponding to the network part are all "1"
and the mask bits corresponding to the host part are all "0". By default, the
subnet mask of a class A network is 255.0.0.0, that of a class B network is
255.255.0.0, and that of a class C network is 255.255.255.0.

 With the address mask, the method of identifying the IP address is as follows:

 Example: 192.168.1.1/255.255.255.0 or 192.168.1.1/24 (the number of consecutive
1s in the mask).
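As a quick illustration of this notation, the standard-library `ipaddress` module can split an address into its network and host parts using either mask form (the addresses below are the examples from the text):

```python
import ipaddress

# Netmask notation and prefix notation describe the same interface.
iface = ipaddress.ip_interface("192.168.1.1/255.255.255.0")
print(iface.network)   # 192.168.1.0/24 -> network part
print(iface.ip)        # 192.168.1.1
print(iface.netmask)   # 255.255.255.0

# The class A example: the network address of 10.110.192.111/8.
a = ipaddress.ip_interface("10.110.192.111/8")
print(a.network)       # 10.0.0.0/8
```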
 According to the preceding description, the IP address is divided into two
parts. The network part uniquely identifies a data link, and the host part
identifies different hosts connected to the same data link. To construct a
network, you must plan a network address for each link on the network. If classes
A, B, and C are assigned to different links, about 17,000,000 links can be
identified by all three types of addresses. However, about 2^24 hosts with a
class A address can connect to one link, 2^16 for a class B address, and 2^8 for a
class C address. It is wasteful to allocate IP addresses in this mode.

 To use IP addresses more efficiently, a large class A, B, or C address block can be
divided into smaller subnets, that is, some host bits are used as network bits. After
subnetting, an IP address can be divided into three parts: network part,
subnet part, and host part. The mask bits of the network part and subnet part are all
"1". With subnets, network addresses are used more effectively. Externally,
it is still one network; internally, it is divided into different subnets.

 For example: If a class C address 192.168.1.17 is in the default state (no subnet
division), that is, the natural mask is used, the network part is 192.168.1.0/24 and
can be allocated to only one data link. With subnet division, for example, if the
first four bits of the host part are used as the subnet part, then for this address
the network part is 24 bits, the subnet part is 4 bits, and the host part is 4 bits.
The class C network segment is divided into 2^4 = 16 subnets to identify 16 data
links. A maximum of 2^4 - 2 = 14 hosts can be connected to each data link.
192.168.1.17/28 belongs to subnet 192.168.1.16/28.
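The subnet division above can be verified with the `ipaddress` module — dividing the /24 into /28 subnets and checking which one 192.168.1.17 falls into:

```python
import ipaddress

# Borrow 4 host bits from 192.168.1.0/24 -> 2**4 = 16 subnets of /28.
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(new_prefix=28))
print(len(subnets))                  # 16
print(subnets[1])                    # 192.168.1.16/28

# 192.168.1.17 belongs to the second /28; each /28 offers 2**4 - 2 = 14 hosts.
host = ipaddress.ip_address("192.168.1.17")
print(host in subnets[1])            # True
print(subnets[1].num_addresses - 2)  # 14
```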
 For a point-to-point link, two IP addresses can meet the requirements. A 30-bit
mask is used. That is: 255.255.255.252.

 For a broadcast link, the mask length depends on the number of hosts on the
broadcast network. If there are 60 hosts on the network, the mask can be 26
bits, because the number of host addresses is 2^(32-26) = 2^6 = 64 > 60. If there
are 120 hosts on the network, the mask can be 25 bits, because the number of host
addresses is 2^(32-25) = 2^7 = 128 > 120. For example: 192.168.1.0/26,
172.16.1.0/25, and so on.
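This mask-sizing rule can be sketched as a small helper (`prefix_for_hosts` is an illustrative name, not a standard API): find the fewest host bits whose usable address count, 2^(32-p) - 2, covers the required hosts.

```python
import math

def prefix_for_hosts(n_hosts: int) -> int:
    """Smallest prefix length whose host space (2**(32-p) - 2) fits n_hosts."""
    # +2 accounts for the network and broadcast addresses; a link needs
    # at least 2 host bits.
    host_bits = max(2, math.ceil(math.log2(n_hosts + 2)))
    return 32 - host_bits

print(prefix_for_hosts(60))    # 26 -> 2**6 = 64 addresses
print(prefix_for_hosts(120))   # 25 -> 2**7 = 128 addresses
print(prefix_for_hosts(2))     # 30 -> the point-to-point /30 case
```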

 The IP address can be used to identify the device.


 Answer: NE1: Eth 1: (192.168.1.1/30), Eth 2: (192.168.1.5/30)
 At first, MPLS was a protocol used to improve the forwarding speed of routers.
However, MPLS has since become an important standard for expanding the scale of IP
networks because of its outstanding performance in Traffic Engineering and VPN on
current IP networks.
 MPLS is a standard routing and switching platform that supports various upper-
layer protocols and services. Multiprotocol means that MPLS can carry multiple
network layer protocols, and MPLS is usually between Layer 2 and Layer 3 of the
network model. A label is a short, easy-to-handle information content that does
not contain topology information and has only local significance.
 MPLS packet forwarding is based on labels. When an IP packet enters an MPLS
network, the edge router on the MPLS ingress analyzes the content of the IP
packet and selects a proper label for the IP packet. Then, all nodes on the MPLS
network use this short label as the forwarding decision basis. When the IP packet
leaves the MPLS network, the label is stripped from the egress edge router.
 The MPLS adopts the dual-plane structure, which consists of the Control Plane and
Forwarding Plane. The control plane is based on connectionless services and
provides powerful and flexible routing functions to meet the network
requirements of various new applications. The control plane is responsible for label
distribution, label forwarding table establishment, and label switching path
establishment and removal. The forwarding plane is also called Data Plane. It is
connection-oriented and supports Layer 2 networks such as ATM and Ethernet.
The forwarding plane adds and deletes labels for IP packets, and forwards received
packets based on the label forwarding table.
 In the packet transport field, the MPLS technology is mainly used to construct
tunnels to carry various types of services transmitted by PWE3 (for example, TDM,
ATM, and Ethernet), to implement end-to-end transmission of services on packet
networks. Therefore, MPLS is not a service or an application. It is actually a tunnel
technology. This technology not only supports multiple upper-layer protocols and
services, but also ensures the security of information transmission to some extent.
 The above figure shows the typical structure of an MPLS network. The basic unit of
an MPLS network is the Label Switching Router (LSR). The network area formed by LSRs
is called an MPLS Domain. An LSR located at the edge of an MPLS domain and
connected to other networks is called a Label Edge Router (LER). An LSR inside the
domain is called a core LSR. If an LSR has one or more adjacent nodes that do not
run MPLS, the LSR is an LER. If all the adjacent nodes of an LSR run MPLS, the LSR
is a core LSR.

 LSR: A Label Switching Router (LSR) is a network device that can swap MPLS
labels and forward packets. It is also called an MPLS node. All LSRs support the
MPLS protocol.

 LER: The LER classifies the packets entering the MPLS domain into FECs and
pushes labels onto them so that they are forwarded as MPLS packets. When a
packet leaves the MPLS domain, the LER pops the label, restores the original
packet, and then forwards it.

 FEC (Forwarding Equivalence Class): A group of data packets that are processed in
an equivalent manner during forwarding, for example, data packets with the same
destination address prefix. Generally, a same label is allocated to one FEC.

 LSP: After IP packets enter the MPLS domain, different nodes assign them
specified labels. Data is forwarded according to these labels, and the path that the
data flow passes through in the MPLS network is called the LSP (Label Switched Path).
An LSP is a unidirectional path in the same direction as the data flow.
 A Label Switched Path (LSP), also called a tunnel, is a unidirectional path. LSRs on
an LSP can be classified into the following types:

 Ingress

 An LSP ingress node pushes a label onto the packet for MPLS packet
encapsulation and forwarding. One LSP has only one ingress node.

 Transit

An LSP transit node swaps labels and forwards MPLS packets according
to the label forwarding table. One LSP may have one or more transit
nodes.

 Egress

 An LSP egress node pops the label and recovers the packet for
forwarding. One LSP has only one egress node.
 Next Hop Label Forwarding Entry (NHLFE): The NHLFE is the core of packet
forwarding on an LSR. Each NHLFE contains the next hop address, interface
address, label operation type, and link layer protocol of the interface. Label
operation types include Push (adding a label), Pop (popping a label), Swap
(swapping a label), and Null (leaving labels unchanged).
 Forwarding Equivalence Class (FEC): A classification and forwarding concept
that groups packets handled in the same forwarding manner into one
class. Packets of the same FEC are processed in the same way in the MPLS
network. FECs can be divided very flexibly, using any combination of source
address, destination address, source port, destination port, protocol type,
VPN, and so on.
 FEC to NHLFE (FTN): The FEC is mapped to the corresponding NHLFE. Only the
ingress node supports this operation.
 Incoming Label Map (ILM): Map the MPLS label to the corresponding NHLFE. Only
transit and egress nodes support this operation.
 The process of forwarding MPLS packets on NE A is as follows:
Receives the packet, and finds the corresponding LSP ID (101) according to
the FEC to which the packet belongs.
 Searches for the corresponding NHLFE entry based on the LSP ID to obtain
the outbound interface (Port1), next hop (Port2), outgoing label (20), and
label operation type (Push). The label operation type of the ingress node is
Push.
 Adds an MPLS label to the packet, and then sends the encapsulated MPLS
packet to the next hop.
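The three ingress steps above can be sketched in Python. The `FTN` and `NHLFE` tables below are hypothetical, modeled only on the NE A example values in the text (LSP ID 101, Port1/Port2, label 20):

```python
# Hypothetical tables modeled on the NE A example in the text.
FTN = {"10.1.1.0/24": 101}                      # FEC -> LSP ID
NHLFE = {101: {"out_if": "Port1", "next_hop": "Port2",
               "out_label": 20, "op": "Push"}}  # LSP ID -> forwarding entry

def ingress_forward(packet: dict, fec: str) -> dict:
    lsp_id = FTN[fec]                  # 1. map the packet's FEC to an LSP
    entry = NHLFE[lsp_id]              # 2. look up the NHLFE for that LSP
    assert entry["op"] == "Push"       # the ingress node always pushes
    packet = dict(packet,              # 3. push the label, keep existing stack
                  labels=[entry["out_label"]] + packet.get("labels", []))
    return {"packet": packet, "out_if": entry["out_if"],
            "next_hop": entry["next_hop"]}

result = ingress_forward({"payload": b"ip-data"}, "10.1.1.0/24")
print(result["packet"]["labels"])      # [20]
print(result["out_if"])                # Port1
```

Transit nodes would do the analogous ILM lookup (incoming label instead of FEC) with a Swap operation, and the egress node a Pop.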
 NG WDM equipment supports the Martini MPLS packet encapsulation format.

 Destination address: The destination MAC address is the MAC address of the
peer interface obtained through the Address Resolution Protocol (ARP). The
MAC address changes in each hop.

 Source address: The source MAC address is the MAC address of the local
interface and changes in each hop.

 802.1q header: NG WDM equipment determines whether the Ethernet frame
at the egress port carries the 802.1q header according to the TAG attribute of
the Ethernet port. When the port attribute is set to Access, the Ethernet
frame does not carry the 802.1q header. When the port attribute is Tag
aware, the VLAN ID in the 802.1q header carried in the MPLS packet is the
Tunnel VLAN ID configured on the NMS. If the Tunnel VLAN ID is not set, the
VLAN ID in the 802.1q header is the default VLAN ID of the NNI port that
sends the MPLS packet (the default value is 1).

 Length/Type: The value is fixed to 0x8847. When detecting that the
Length/Type of an Ethernet frame is this value, the NG WDM equipment
considers that the packet is an Ethernet frame that carries an MPLS packet.
The NE does not check the MPLS packets on the ingress port according to
the TAG attribute and the VLAN ID of the LSP.

 MPLS packet: Consists of MPLS labels and Layer 3 user packets.

 Frame Check Sequence (FCS): It is used to check the correctness of the
Ethernet frame.
 A label is a short and fixed-length identifier that uniquely identifies the FEC to
which a packet belongs. In some cases, for example, to perform load balancing,
one FEC may have multiple labels, but one label on a router can represent only
one FEC. The label is similar to the VPI/VCI of ATM and the DLCI of Frame Relay,
and is a connection identifier. The length of the label is 4 bytes, 32 bits.

 The label has four domains:

 Label: 20 bits, indicating the label value.

 EXP: 3 bits, used for extended purposes. EXP is used to classify MPLS packets
on the NG WDM equipment, which is similar to the VLAN priority specified in
the IEEE 802.1q protocol.

 S: 1 bit, stack bottom flag. MPLS supports multi-layer labels, that is, label
nesting. If the value of S is 1, it indicates that the label is the bottom label.

 TTL: 8 bits, which is the same as the Time To Live (TTL) in IP packets.

 Labels are encapsulated between the link layer and the network layer. Therefore,
labels can be supported by any link layer.
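The four label fields above pack into exactly 32 bits (Label in bits 31..12, EXP in 11..9, S in bit 8, TTL in 7..0), which a short sketch can encode and decode with `struct`:

```python
import struct

def encode_label(label: int, exp: int, s: int, ttl: int) -> bytes:
    """Pack the four MPLS label fields into the 4-byte wire format."""
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)   # network byte order, 32 bits

def decode_label(data: bytes) -> dict:
    word, = struct.unpack("!I", data)
    return {"label": word >> 12,           # 20-bit label value
            "exp": (word >> 9) & 0x7,      # 3-bit EXP
            "s": (word >> 8) & 0x1,        # 1-bit bottom-of-stack flag
            "ttl": word & 0xFF}            # 8-bit TTL

raw = encode_label(label=20, exp=5, s=1, ttl=64)
print(len(raw))           # 4
print(decode_label(raw))  # {'label': 20, 'exp': 5, 's': 1, 'ttl': 64}
```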
 A label stack refers to an ordered set of labels. The label next to the Layer 2 header
is called the top label or outer label, and the label next to the IP header is called
the bottom label or inner label. Theoretically, an unlimited number of MPLS labels
can be stacked. The label stack organizes labels in Last In First Out mode and
processes labels from the top of the stack.

 Label space: The value range for label distribution is called a label space. Two types
of label space are available:

 Per-Platform Label Space

 An LSR uses one label space; that is, the labels are unique per LSR.

 Per-Interface Label Space

 Each interface on an LSR uses a label space; that is, the labels are unique
per interface, but can be repeated on different interfaces.

 The equipment supports only the per-platform label space. Ingress labels and
egress labels must be unique per NE.
 Answer: LSRs on an LSP can be classified into the following types:

 Ingress

 An LSP ingress node pushes a label onto the packet for MPLS packet
encapsulation and forwarding. One LSP has only one ingress node.

 Transit

 An LSP transit node swaps labels and forwards MPLS packets according
to the label forwarding table. One LSP may have one or more transits
nodes.

 Egress

 An LSP egress node pops the label and recovers the packet for
forwarding. One LSP has only one egress node.
 PWE3 (Pseudo Wire Emulation Edge-to-Edge) is one of the solutions proposed to
combine traditional communication networks with existing packet networks. PWE3
is a Layer 2 service bearer technology that emulates the basic behaviors and
characteristics of ATM, FR, Ethernet, low-speed TDM circuits, and SONET/SDH
services on a PSN. As an end-to-end Layer 2 service bearer technology, PWE3
provides point-to-point L2VPN services on a public network and transmits various
services (FR, ATM, Ethernet, and TDM SONET/SDH) through the PSN, providing
end-to-end virtual link emulation at the PSN boundary. Therefore, a traditional
network can be interconnected with a packet switching network by using the PWE3
technology, so as to implement resource sharing and network expansion.

 The concepts in PWE3 are as follows:

 Attachment Circuit (AC)

 A link or virtual link between the Customer Edge (CE) and the Provider
Edge (PE).

 Pseudo Wire (PW)

A virtual connection between two PEs, which transmits frames between the
two PEs. The PE uses signaling to establish and maintain the PW, and
the PW status information is maintained by the two endpoint PEs of the
PW.
 PWE3 is a processing mechanism that emulates the key attributes of a service and
transmits the service through a tunnel (IP/MPLS) on the PSN network.

 The internal data service carried by the PW is invisible to the core network, in other
words, the core network is transparent to the CE data flow. That is, the PW
emulation process must ensure the original attributes of the service as much as
possible.
 The Ethernet service forwarding process on the local PE (PE1) is as follows:

Extracts local Ethernet service packets transmitted from department A and
department B from the AC.

 Pre-processes the service payloads before PWE3 emulation.

 The forwarder maps the service payloads to the corresponding PWs.

Encapsulates data transmitted on PWs into PWE3 packets in standard format,
including generating control words and adding the PW label and tunnel label
to the data.

 Maps PWs to a tunnel for transmission.

 The Ethernet service forwarding process on the remote PE (PE2) is as follows:

 Demultiplexes the PW from the tunnel.

 Decapsulates the PW, and removes the Tunnel label, PW label, and control
word.

 Extracts service payloads from PWs.

 Restores the service payload to the local Ethernet service packet.

 The forwarder selects the AC that forwards the packet and forwards the
packet to department A and department B.
 The specific PWE3 encapsulation format varies slightly according to the type of
emulated service, but a generic encapsulation format is also available.

 A PWE3 packet contains the MPLS label, control word (Optional), and payload.

 MPLS Label

 The MPLS labels include tunnel labels and PW labels, which are used to
identify tunnels and PWs respectively. The format of the tunnel label is the
same as that of the PW label.

 Control Word

 The 4-byte control word is a header used to carry packet information over an
MPLS PSN.

The control word is used to check the packet sequence, fragment packets,
and reassemble packets.

 Payload

 Payload indicates the payload of a service in a PWE3 packet.
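Putting the three parts together, a PWE3 packet on an MPLS PSN is simply tunnel label + PW label + (optional) control word + payload. The sketch below is illustrative only: the control-word contents are simplified to a bare sequence number, and the label values are made up.

```python
import struct

def mpls_label(value: int, s: int, exp: int = 0, ttl: int = 64) -> bytes:
    """4-byte MPLS label: value (20b) | EXP (3b) | S (1b) | TTL (8b)."""
    return struct.pack("!I", (value << 12) | (exp << 9) | (s << 8) | ttl)

def build_pwe3_packet(tunnel_label: int, pw_label: int, payload: bytes,
                      seq: int = 0) -> bytes:
    # Tunnel label on top (S=0), PW label at the bottom of the stack (S=1).
    stack = mpls_label(tunnel_label, s=0) + mpls_label(pw_label, s=1)
    # Simplified 4-byte control word carrying only a sequence number.
    control_word = struct.pack("!I", seq & 0xFFFF)
    return stack + control_word + payload

pkt = build_pwe3_packet(tunnel_label=30, pw_label=40,
                        payload=b"ethernet-frame")
print(len(pkt))   # 4 + 4 + 4 + 14 = 26 bytes
```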


 Layer 2 virtual private network (L2VPN) defined by IETF includes the virtual private
wire service (VPWS) and virtual private LAN service (VPLS). VPWS is used to
provide point-to-point service at Layer 2 and VPLS is used to simulate a local area
network (LAN) in a wide area network (WAN).

 VPWS is a Layer 2 virtual private network (VPN) technology for point-to-point
transmission. It performs one-to-one mapping between a received attachment
circuit (AC) and a pseudo wire (PW). By binding ACs and PWs in the <AC, PW, AC>
format to form a virtual circuit, VPWS achieves transparent transmission of Layer 2
services between users.

 VPLS is a Layer 2 VPN technology for simulating LANs. With VPLS, each L2VPN
is represented on an NE by a virtual switching instance (VSI). The VSI maps
multiple ACs to PWs and connects multiple Ethernet LANs so that they work as
if they were one LAN.
 Answer: PWE3 is a Layer 2 service bearer technology that emulates the basic
behaviors and characteristics of services such as Ethernet on a packet switched
network (PSN). Aided by the PWE3 technology, conventional networks can be
connected by a PSN. Therefore, resource sharing and network scaling can be
achieved.
 MPLS-TP is an extension of MPLS and meets transport network requirements,
especially OAM and protection switching.

 MPLS-TP has become a mainstream technology in the process of moving mobile
bearer networks to all-IP networks.
 This relationship is often written as a formula: MPLS-TP is a subset of MPLS that
removes connectionless IP-based forwarding and adds end-to-end OAM functions.
 The MPLS-TP packet transport network uses the ASON (automatic switching
optical network) architecture. Therefore, the MPLS-TP packet transport network
still consists of the transport plane (user/data plane), management plane, and
control plane, the three planes are independent of each other.

 The transport plane adapts and forwards customer data and signaling data
based on MPLS-TP labels, and provides connection-oriented O&M
management (OAM) and protection restoration functions.

The main function of the control plane is to establish label forwarding
channels through the signaling mechanism and to distribute labels.
 The management plane implements the management functions of the
transport plane, control plane, and entire system, and provides collaborative
operations between these planes.

 In the MPLS-TP packet transport network architecture, MPLS-TP does not need to
redefine the functions provided by IP/MPLS, but uses the data plane data
processing process defined by IETF for MPLS and PWE3, therefore, the MPLS-TP
transport plane is based on MPLS and PWE3, but its OAM capability needs to be
enhanced.

 The MPLS-TP control plane uses the generalized multiprotocol label switching
(GMPLS) protocol of the IETF to implement its functions. Its control and data
transmission are more coupled.

 Separating the data transport plane from network resource management enables
the MPLS-TP transport plane to be independent of its service network and related
control network (management plane and control plane), facilitating network
construction and capacity expansion.
 In the MPLS-TP OAM protocol model, a network is divided into three layers:
section layer, tunnel layer, and PW layer. Each layer is described as follows:

 The section layer serves the tunnel layer.

 The tunnel layer is a client of the section layer and serves the PW layer.

 The PW layer is a client of the tunnel layer and serves services.

 Fault management includes:

 Continuity Check (CC)

 Remote Defect Indication (RDI)

 Alarm Indication Signal (AIS)

 Fault locating includes:

 Loopback (LB)

 Link Trace (LT)

 Performance monitoring includes:

 Loss Measurement (LM)

 Delay Measurement (DM)


 Tunnel APS is a function that protects tunnels using the APS protocol. In a tunnel
APS protection group, when the working tunnel is faulty, the service can be
switched to the preconfigured protection tunnel, improving the reliability of
transmitting services over tunnels.
 Tunnel APS has the following characteristics:
 Tunnel APS is a tunnel-level end-to-end protection.
 Tunnel APS can determine whether to perform switching based on physical
layer detection and link layer detection.
Physical layer detection: Detects signal loss at the microsecond level.
Faults at the physical layer will cause link layer faults.
 Link layer detection: MPLS-TP OAM is used for detection. After
detecting a fault, the ingress and egress nodes exchange APS protocol
packets to implement protection switching.
 PW APS is a function that protects PWs. In a PW APS protection group, when the
working PW is faulty, the service can be switched to the preconfigured protection
PW, improving the reliability of transmitting services over PWs.
 PW APS has the following characteristics:
 PW APS is a PW-level end-to-end protection.
 PW APS can determine whether to perform switching based on physical layer
detection and link layer detection.
Physical layer detection: Detects signal loss at the microsecond level.
Faults at the physical layer will cause link layer faults.
 Link layer detection: MPLS-TP OAM is used for detection. After
detecting a fault, the ingress and egress nodes exchange APS protocol
packets to implement protection switching.
 Answer:

 The IETF and ITU-T define a protocol set, that is, Multiprotocol Label
Switching Transport Profile (MPLS-TP), for transmitting packet services by
using the MPLS technology on the existing transport network. MPLS-TP is
compatible with existing MPLS standards and extends to transport networks.

 The protection type can be tunnel APS or PW APS.

 Tunnel APS includes MPLS Tunnel APS 1+1 protection and MPLS Tunnel
APS 1:1 protection.

 PW APS includes PW APS 1+1 protection and PW APS 1:1 protection.


 The L2 layer implements Ethernet/MPLS-TP switching, the L1 layer implements
ODUk/VC switching, and the L0 layer implements λ (wavelength) switching. Together
they form Huawei's next-generation intelligent optical transport platform.

 The OTN packet equipment integrates the L0/L1/L2 technology plane and
modular design. The equipment can be combined into a single OTN
equipment, a single packet equipment, and a hybrid equipment to flexibly
meet the actual service bearing requirements.

 The OTN packet equipment supports WDM and 40G/100G. The bandwidth
can be expanded infinitely to meet the bandwidth increase requirement.

 The OTN packet equipment supports the SDH plane, which meets the
smooth evolution of the SDH network on the live network. The live network
gradually transits from the SDH network to the OTN network to protect the
investment.

The OTN packet equipment is ONE-BOX equipment that provides L0/L1/L2 to
construct a simpler and more reliable network.

The OTN packet equipment can select L2 packet convergence to meet high
bandwidth utilization requirements, L1 fixed pipes to meet high security
requirements, or L0 wavelengths to meet high bandwidth requirements,
according to the service traffic characteristics. This flexible selection
builds a sustainable and efficient bearer (transport) network.
 Comprehensive bearer for large bandwidth and multiple services:

 Large bandwidth: The OTN optical path technology provides large bandwidth.

Multi-service: The MPLS-TP PWE3 emulation technology implements unified
bearing of multiple services and integrated bearing of mobile, broadband,
and private line services.
 Large-bandwidth and coarse-grained services, such as PON, SAN, and
enterprise private line services, are transported by OTN.
 MS-OTN equipment makes network construction flexible and easy to expand.

 Provide full-granularity pipes:

 L0: Wavelength λ 10G/40G/100G

 L1 pipe: VC-n/ODUk

 L2 pipe: Any bandwidth PW/LSP pipe

 Provide rigid and flexible pipes:

 Rigid pipe (λ +ODU+VC): High reliability and high security.

Flexible pipe PW/LSP: The bandwidth can be configured and dynamically
adjusted. Flexible planning and low cost.

 Flexible network planning: The bandwidth of the packet L2 pipe can be configured
and adjustable. The network service topology supports P2P, P2MP, and MP2MP.

 Easy network expansion: Modular design + centralized scheduling, non-blocking
in any service direction, and arbitrary scheduling.

 Efficient service transmission: Low-rate services such as E1/FE/GE are aggregated
so that multiple services share a pipe to improve bandwidth utilization. 10G/100G
high-speed services use L0 to implement efficient forwarding, low latency, and
highly reliable transmission.
 SDH smooth evolution: Smooth inheritance of traditional SDH services.
 Answer:

 Unlimited bandwidth expansion

 Flexible service adaptation

 Simple and reliable network


 E-Line: It refers to any Ethernet service based on the point-to-point Ethernet
Virtual Connection (EVC).
 The above figure shows the networking of the E-Line services carried by the
Pseudo Wire (PW).
 Companies A and B have branches in both places. The branches of each company
need to communicate with each other. The services between the two companies
need to be isolated. In this case, you can configure the UNI-NNI E-Line services
carried by PWs to meet the communication requirements between branches of
Company A and Company B. In addition, because different services are carried by
different PWs, so the services between the two companies can be isolated.
 The services accessed by the client side are encapsulated into the PW and
then carried by the tunnel.
 The private line services of different companies are carried by different PWs
and are transmitted to the same port on the network side. In this way, the
ports on the network side are saved and the bandwidth utilization is
improved. In the upstream direction of the client side, hierarchical QoS can
be configured for data packets.
 Advantages:
 Services can be isolated by PWs and MPLS tunnels.
 Supports various QoS policies.
 Services can support MPLS tunnel APS protection.
 Disadvantages:
 All the NEs on the PSN must support MPLS.
 E-LAN: It refers to any Ethernet service based on the multipoint-to-multipoint
Ethernet Virtual Connection (EVC).

 The three branches of a company are located at NE1, NE2, and NE3. The branches
need to share information with each other. In this case, the E-LAN services carried
by PWs can be configured.
 The Ethernet services Service1 and Service2 that do not carry the VLAN ID or carry
the unknown VLAN ID are accessed to NE1 through Port1 and Port2 respectively.
Port 1 and port 2 transparently transmit Service1 and Service2 to Port3 and Port4
respectively. Port3 and Port4 transmit Service1 and Service2 to NE2. NE2 processes
services in the same way as NE1.
 Company A and Company B have branches in both places. The branches of each
company need to communicate with each other. In this case, you can configure the
UNI-NNI E-Line services carried by ports to meet the communication requirements
between branches of Company A and Company B.

 In this case, each branch of company A and company B can exclusively occupy a
client side port. Each physical port on the network that the private line passes
through is shared by the services with different VLANs. For a single station,
complex traffic classification can be performed on data packets in the upstream
direction of the client side, and different QoS policies can be used based on traffic
classification.

 The advantages of E-Line services carried by ports are as follows:

 The working mechanism is simple.

Services are isolated by VLAN on the network side.

 Disadvantages:

The VLAN ID space is limited to 4096 values, of which 1-4095 are usable.

 QoS policies can be applied only to a single service flow.

 Services are not protected.


 Companies A and B have branches in both places. The branches of each company
need to communicate with each other. The services between the two companies
need to be isolated. The internal VLANs of Company A are from 1 to 100, and the
internal VLANs of Company B are from 1 to 200. In this case, you can configure the
UNI-NNI Ethernet services carried by QinQ links to meet the communication
requirements between branches of Company A and Company B. Different services
are carried by QinQ links with different SVLAN values. In this manner, services are
isolated from each other and VLAN resources on the packet switching network are
saved.

 In this case, packets from different companies connected to the client side are
added with different SVLANs, and then are carried by the same link on the network
side. Private line services of different companies are added with an S-VLAN tag
and sent to the same port on the network side. This saves network-side ports and
improves bandwidth utilization. In addition, only a small number of VLANs are
occupied on the packet switching network, which saves VLAN resources on the
network. You can configure QinQ policies to implement QoS for services carried by
QinQ links.
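The S-VLAN tagging described above can be sketched at the frame level. This is a toy illustration, not the equipment's implementation: it inserts an outer tag with the standard IEEE 802.1ad TPID 0x88A8 after the MAC addresses (some equipment uses 0x8100 for the outer tag as well).

```python
import struct

TPID_SVLAN = 0x88A8   # IEEE 802.1ad S-VLAN ethertype

def push_svlan(frame: bytes, svlan_id: int, pcp: int = 0) -> bytes:
    """Insert an S-VLAN tag after the 12-byte destination/source MAC pair."""
    tci = (pcp << 13) | (svlan_id & 0x0FFF)          # priority + 12-bit VLAN ID
    tag = struct.pack("!HH", TPID_SVLAN, tci)
    return frame[:12] + tag + frame[12:]

# Toy frame: dst MAC + src MAC + C-VLAN tag (0x8100, VLAN 10) + payload.
frame = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, 10) + b"payload"
tagged = push_svlan(frame, svlan_id=300)
print(tagged[12:14].hex())   # 88a8 -> outer S-VLAN tag, C-VLAN kept inside
```

The inner C-VLAN tag is untouched, which is exactly why QinQ isolates companies whose internal VLAN ranges overlap.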

 Advantages:

 The working mechanism is simple.

 Multiple QoS policies can be used.

 Services can be isolated by service VLAN, S-VLAN, and port.

 Disadvantages:

 No protection.
 The above figure shows the typical application scenario of the service model. The
transmission network needs to carry the A service that is accessed by NE2 and NE3,
and the A service is converged and interacted on the convergence node NE1.
Because service isolation is not required, the IEEE 802.1d bridge is used at the
convergence node NE1 to implement service grooming.
 The above figure shows the typical application scenario of the service model. The
transmission network needs to carry the G and H services accessed by NE2 and
NE3. The two services are converged and interacted on the convergence node NE1.
The G and H services use different VLAN planning. Therefore, the 802.1q bridge is
used on each node and the sub-switching domain is divided according to the
VLAN. In this way, the two services are differentiated and isolated.

 You can also configure VLAN-based E-Line services on NE2 and NE3 to access
services.
 The above figure shows the typical application scenario of the service model. The
transmission network needs to carry the G and H services accessed by NE2 and
NE3. The services are converged and interacted on the convergence node NE1. The
G and H services use the same C-VLAN planning. Therefore, you need to add S-
VLAN tags to the two types of services to differentiate and isolate services.

 You can also configure QinQ-based E-Line services on NE2 and NE3 to access
services.
 1. Answer: Supports E-Line and E-LAN services.

 2. Answer:

 E-LAN services carried by PWs

 IEEE 802.1d bridge-based E-LAN services

 IEEE 802.1q bridge-based E-LAN services

 IEEE 802.1ad bridge-based E-LAN services


 Tunnel APS is a function that protects tunnels using the APS protocol. In a tunnel
APS protection group, when the working tunnel is faulty, the service can be
switched to the preconfigured protection tunnel, improving the reliability of
transmitting services over tunnels.

 Tunnel APS has the following characteristics:

 Tunnel APS is a tunnel-level end-to-end protection.

Tunnel APS can determine whether to perform switching based on physical
layer detection and link layer detection.
 Physical layer detection: Detects signal loss at the microsecond level.
Faults at the physical layer may cause link layer faults.
 Link layer detection: MPLS-TP OAM is used for detection. After
detecting a fault, the ingress and egress nodes exchange APS protocol
packets to implement protection switching.

 The link layer detection mode of tunnel APS is implemented through MPLS-TP
OAM. Before creating APS protection, you need to enable MPLS-TP OAM for the
tunnel. The packet detection period of MPLS-TP OAM is usually set to 3.33 ms. The
APS protocol can be enabled only after the tunnel APS protection group is
configured at both ends. Otherwise, the MPLS_TUNNEL_OAMFAIL alarm is
generated.

 Tunnel APS cannot be used with PW APS at the same time. That is, if Tunnel APS is
configured for the tunnel to which a PW belongs, PW APS cannot be configured
for the PW. If PW APS is configured for a PW, Tunnel APS cannot be configured for
the tunnel to which the PW belongs.
 In Tunnel APS 1+1 protection, services are dually fed to the working and
protection channels at the transmit end, and are selectively received at the receive
end. When the device detects that the working channel fails, the receive end
selects the protection channel to receive services.

 MPLS-TP OAM is used for detection. After detecting a fault, the ingress and egress
nodes exchange APS protocol packets to implement protection switching.

 The APS protocol packets are transmitted through the protection channel and carry the protocol status and switching status between the two ends. After detecting a fault at the service detection point, the equipment at both ends performs service switching and selective receiving according to the protocol status and switching status.

 The switching modes of the Tunnel APS 1+1 protection are as follows: Single-
ended switching and dual-ended switching.
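The dual-feed and selective-receiving behavior of 1+1 protection described above can be sketched in a few lines of Python; the function names are illustrative and not part of any Huawei API:

```python
# Minimal sketch of 1+1 dual feed and selective receiving.
# The transmit end copies every frame to both channels; the receive
# end listens on the working channel until it is flagged as failed.

def dual_feed(frame):
    """Transmit end: the same frame is sent on both channels."""
    return {"working": frame, "protection": frame}

def selective_receive(channels, working_failed):
    """Receive end: select the protection copy only on working failure."""
    return channels["protection"] if working_failed else channels["working"]

tx = dual_feed("payload-1")
assert selective_receive(tx, working_failed=False) == "payload-1"  # normal case
assert selective_receive(tx, working_failed=True) == "payload-1"   # after switch, still delivered
```

Because the frame is already present on both channels, the receive end can switch without any action at the transmit end, which is why 1+1 supports single-ended switching.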
 Although in Tunnel APS 1:1 protection the working channel and the protection channel also correspond to and protect each other, services are not transmitted on the two channels at the same time. 1:1 protection is a special case of 1:N protection, in which one protection channel protects N working channels at the same time.

 As shown in the preceding figure: There are two MPLS tunnels in the figure. The
solid line indicates the working tunnel, and the dashed line indicates the
protection tunnel. In normal cases, services are transmitted through the working
tunnel, while the protection tunnel is used to transmit the APS protocol.

 MPLS-TP OAM tests the connectivity of each unidirectional MPLS tunnel. The source site sends connectivity test frames periodically. When receiving test frames, the sink site verifies the connectivity and checks whether the tunnel is connected.
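As a rough sketch of this continuity check, the sink declares a fault when no test frame arrives within a timeout. The 3.33 ms period matches the OAM detection period mentioned earlier; the 3.5-period loss threshold is a common MPLS-TP OAM convention and an assumption here:

```python
# Sketch of continuity-check (CC) loss detection on one unidirectional tunnel.
# The source sends a CC frame every `period` seconds; the sink declares loss
# of continuity if no frame arrives within `threshold` periods.

CC_PERIOD = 0.00333          # 3.33 ms detection period (from the text)
LOSS_THRESHOLD = 3.5         # periods without a frame => fault (assumed convention)

def continuity_lost(last_rx_time, now, period=CC_PERIOD, threshold=LOSS_THRESHOLD):
    """Return True if the sink should declare a connectivity fault."""
    return (now - last_rx_time) > threshold * period

# Frames arriving on schedule: no fault.
assert continuity_lost(last_rx_time=0.0, now=0.00333) is False
# ~12 ms of silence exceeds 3.5 periods (~11.7 ms): fault declared.
assert continuity_lost(last_rx_time=0.0, now=0.012) is True
```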

 The Tunnel APS 1:1 switching mode supports only dual-ended switching.
 PW APS is a function of protecting PWs based on the APS protocol. When the
working PW is faulty, services can be switched to the preset protection PW. In this
manner, important services are protected and the reliability of the PW
transmission service is improved.

 PW APS has the following characteristics:

 PW APS is a PW level end-to-end protection.

 PW APS can determine whether to perform switching based on physical layer detection and link layer detection.

 Physical layer detection: Detects signal loss at the microsecond level. Faults at the physical layer may cause link layer faults.

 Link layer detection: MPLS-TP OAM is used for detection. After detecting a fault, the ingress and egress nodes exchange APS protocol packets to implement protection switching.

 MPLS-TP PW OAM must be enabled for both the working and protection PWs of a
PW APS protection group, and the detection packet period must be set to 3.33 ms.
 In PW APS 1+1 protection, one protection PW is used to protect the working PW.
The 1+1 protection adopts the dual fed and selective receiving mechanism. The
service is switched to the protection PW only when the working PW fails.

 MPLS-TP OAM is used for fault detection. After detecting a fault, the ingress and egress nodes exchange APS protocol packets to implement protection switching.

 The PW APS 1+1 protection switching modes are as follows: single-ended switching and dual-ended switching.
 In PW APS 1:1 protection, one protection PW is used to protect the working PW. In normal cases, the service is transmitted only on the working PW. The service is switched to the protection PW only when the working PW fails.

 As shown in the preceding figure, the solid line indicates the working PW, and the dashed line indicates the protection PW. In normal cases, the service is transmitted only on the working PW, and the protection PW is used to transmit the APS protocol.

 MPLS-TP OAM is used for detection. After detecting a fault, the ingress and egress
nodes exchange APS protocol packets to implement protection switching.

 PW APS 1:1 protection switching mode: Dual-ended switching

 Bridging means that the service transmitted to the working PW in the normal state
is switched to the protection PW.

 Switching means that the service received from the working PW in the normal
state is switched to receive from the protection PW.
 With the wide application of Ethernet technologies in metropolitan area networks
(MANs) and wide area networks (WANs), carriers raise higher requirements on the
bandwidth and reliability of backbone links that use Ethernet technologies. In
traditional technologies, upgrading hardware is often used to increase the
bandwidth of Ethernet links. However, this solution requires high costs and is not
flexible enough. The link aggregation technology solves these problems.
 A logical link formed after bundling of multiple physical links is called a Link
Aggregation Group (LAG).

 In an Ethernet network, a link corresponds to a port. Therefore, link aggregation is also called port aggregation.
 The Link Aggregation Control Protocol (LACP) based on the IEEE 802.1ax standard
provides a standard negotiation mode for the equipment that exchanges data. The
system automatically generates an aggregated link based on the configuration
and starts the aggregated link to transmit and receive data. After the aggregation
link is formed, the link status is maintained. When the aggregation condition
changes, the link aggregation is automatically adjusted or disbanded.

 LAG is classified into the following types:

 Manual aggregation

 A user manually creates a LAG. When a member port is added or deleted, the LACP protocol is not used. A port can be in the Up or Down state. You can determine whether to perform aggregation based on the physical status (Up or Down) of the interface.

 Compared with static aggregation, aggregation control is less accurate and effective.

 Static aggregation

 A user creates a LAG. When a member port is added or deleted, the LACP protocol is used. A port can be in the Selected (active), Unselected (inactive), or Standby (backup) state. The LACP protocol is used to exchange aggregation information between devices to reach an agreement on the aggregation information.

 Compared with manual aggregation, aggregation control is more accurate and effective.
 Load Sharing mode (increasing the link capacity)

 LAG provides users with an economical method to increase the link capacity. By binding multiple physical links, users can obtain a data link with higher bandwidth without upgrading existing devices. The capacity of the aggregated data link is equal to the total capacity of the physical links. The aggregation module allocates service traffic to different members based on the load balancing algorithm to implement link-level load balancing.

 All member links in a LAG share traffic and load.

 To ensure load balancing over member links in a LAG, hash algorithms are used.
The hash algorithms allocate traffic based on:

 MAC addresses, including source MAC addresses, destination MAC addresses, and source MAC addresses plus destination MAC addresses

 IP addresses, including source IP addresses, destination IP addresses, and source IP addresses plus destination IP addresses

 MPLS labels

 When a LAG member changes or some links are faulty, the system automatically
reallocates traffic.
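As an illustration of hash-based member selection, the sketch below maps a flow key to one member link; CRC32 stands in for the vendor-specific hash, which is an assumption here:

```python
# Sketch of hash-based LAG load balancing: a flow key (e.g. source and
# destination MAC) is hashed to pick one member link, so frames of the
# same flow always take the same link and frame ordering is preserved.
import zlib

def select_member(src_mac, dst_mac, members):
    """Pick a member link for this flow. CRC32 is an illustrative
    stand-in for the equipment's real hash algorithm."""
    key = (src_mac + dst_mac).encode()
    return members[zlib.crc32(key) % len(members)]

links = ["port-1", "port-2", "port-3"]
flow_a = select_member("00:1b:21:aa:01:01", "00:1b:21:bb:02:02", links)
# The same flow always maps to the same member link:
assert flow_a == select_member("00:1b:21:aa:01:01", "00:1b:21:bb:02:02", links)
# If a member fails, traffic is re-hashed over the remaining links:
assert select_member("00:1b:21:aa:01:01", "00:1b:21:bb:02:02", links[:2]) in links[:2]
```

Hashing per flow rather than per frame is the design choice that keeps frames of one conversation in order while still spreading different flows across the members.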
 Non-Load Sharing mode (improving link availability)

 In a LAG, members dynamically back up each other. When a link is disconnected, other members can quickly take over the work of the faulty link. The backup process of link aggregation is only related to the links in the aggregation group and is irrelevant to the links outside the aggregation group.

 The LAG has only one member link as the active link that carries traffic. The other member links carry no traffic and are in the ready state.

 When an active link in a LAG fails, the system takes a member link in the ready state as the new active link to recover from the link failure.

 When the LAG is configured to work in non-load sharing mode, the revertive mode
can be set to Revertive or Non-Revertive.

 When this parameter is set to Revertive, services are automatically switched back to the working link after the working link recovers.

 When this parameter is set to Non-Revertive, after the working link recovers, the LAG remains unchanged and services are still transmitted over the protection link.
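The revertive and non-revertive behavior can be sketched as a small decision function (illustrative only, not device logic):

```python
# Sketch of revertive vs. non-revertive behavior in a non-load-sharing LAG.
def active_link(working_ok, currently_on_protection, revertive):
    """Return which link carries traffic, given the working link state."""
    if not working_ok:
        return "protection"           # working link down: always switch away
    if currently_on_protection and not revertive:
        return "protection"           # non-revertive: stay on the protection link
    return "working"                  # revertive: switch back automatically

# After the working link recovers:
assert active_link(working_ok=True, currently_on_protection=True, revertive=True) == "working"
assert active_link(working_ok=True, currently_on_protection=True, revertive=False) == "protection"
```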
 MC-LAG uses the MC-LAG control protocol to aggregate multiple inter-chassis data links that connect to the same device, providing a more reliable connection. MC-LAG is an extension of single-chassis LAGs. It allows links of multiple NEs to be aggregated into one link aggregation group (LAG). When a link or an NE fails, MC-LAG automatically switches services to another available link in the same LAG, enhancing link reliability.
 As shown in the figure, services from the DSLAM are transmitted to the Router
over the PSN; NE1 and NE2 work with the Router to provide MC-LAG protection
for services.
 An MC-LAG protection scheme in dual-homing protection consists of the
following parts:
 Single-chassis (SC) LAGs on NE1 and NE2, that is, LAG1 and LAG2
 An MC-LAG between NE1 and NE2
 A LAG on the Router, that is, LAG3
 NE1 and NE2 communicate with each other by means of the inter-chassis
synchronous communication tunnel. Specifically, the two NEs periodically
exchange information about the status of LAG1 and LAG2 and negotiate the
active/standby status of LAG1 and LAG2 based on fault conditions.
 The configuration requirements for the LAG on the equipment are as follows:
 The intra-device LAG supports two working modes: non-load sharing and load sharing. The working modes of LAG1 and LAG2 must be the same.
 The intra-device LAG supports two aggregation methods: manual aggregation and static aggregation. The aggregation methods of LAG1, LAG2, and LAG3 on the interconnected equipment must be the same.
 The configuration requirements for MC-LAG are as follows: The MC-LAG supports
the revertive mode and non-revertive mode after the primary link recovers from a
fault. The revertive mode of the MC-LAG must be the same as that of the
interconnected LAG3.
 The MC-LAG protocol channel can be a DCN channel or an MPLS channel.

 MC-LAG can only be set to non-load sharing mode.

 LAG1 and LAG2 can be configured to load sharing mode or non-load sharing
mode. However, the LAG modes on different NEs must be the same.
 Answer: NG WDM equipment provides a series of packet-based network-level
Ethernet protections, including PW APS, Tunnel APS, LAG, and MC-LAG.
 Company A and Company B each have a branch in City 1 and City 2. The branches of each company need to communicate with each other, and the services of Company A and Company B must be isolated from each other. In this case, two PWs are used to meet the requirement.
 The EM20 board is a packet board. It receives a maximum of 8 x 10GE and 12 x GE services, or 20 x GE/FE services, processes the packet services, and transmits the packet data to the cross-connect board for centralized grooming at the equipment level.

 The HUNQ2 board supports hybrid transmission of OTN, SDH, and packet services. It supports Layer 2 switching, OTN interfaces, and SDH overhead processing. The board supports hybrid transmission of ODU0, ODU1, ODUflex, ODU2, ODU2e, STM-16, and packet service signals whose total bandwidth does not exceed 40 Gbit/s. One optical port supports hybrid transmission of ODU0, ODU1, ODUflex, and STM-16 signals.
 For the OSN 1800V packet configuration, when the universal line board HUNQ2 is
used as the NNI port, you need to configure the mapping between the virtual port
and ODUk timeslot.
 IP address planning principles: The ports at both ends of a link must belong to the
same subnet. The port IP addresses of different links cannot be in the same subnet.
In addition, the IP address must be different from the LSR ID.
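These planning rules can be checked mechanically. Below is a minimal sketch using Python's standard ipaddress module, with illustrative addresses and an assumed /30 prefix per link:

```python
# Sketch of the IP planning rules above: the two ends of a link must be
# in the same subnet, and ports of different links must not share a subnet.
import ipaddress

def same_subnet(ip_a, ip_b, prefix=30):
    """True if both addresses fall in the same /prefix subnet."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Two ends of one link: must be in the same /30 subnet.
assert same_subnet("10.0.0.5", "10.0.0.6") is True
# Ports of different links: must NOT share a subnet.
assert same_subnet("10.0.0.5", "10.0.0.9") is False
```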
 In the navigation tree, click an NE and choose Configuration > Group
Configuration > MPLS Management > Basic Configuration. The LSR ID
configuration page is displayed.

 Set the basic information about the LSR ID according to the network planning
information:

 NE13 LSR ID: 172.16.0.13.


 When the universal line board HUNQ2 is used as the NNI port, set the virtual port
to the ETH type and then set the ODUk timeslot mapping.

 Choose HUNQ2 from the main menu. Choose Configuration > Virtual Port
Mapping Management from the main menu. On the Virtual Port Mapping
Management Configuration page, perform the following operations:

 Port type: ETH.

 The bandwidth is the client-side service bandwidth. In this example, GE is used.

 Click Add.

 Select the 40001 port mapping: Set this parameter to HUNQ2-IN1/OUT1-OCH:1-ODU2:1-ODU1:1-ODU0:1 according to the planning.

 Select the 40002 port mapping: Set this parameter to HUNQ2-IN2/OUT2-OCH:1-ODU2:1-ODU1:1-ODU0:1 according to the planning.

 Click Apply.

 After the setting, search for optical-layer OCh trails as the physical layer for
carrying Ethernet services. Choose Service > WDM Trail > Search WDM Trail
from the main menu.
 UNI ports are user-side ports. For different client-side signals, UNI ports must be configured with corresponding attributes. For UNI ports, basic attributes and Layer 2 attributes must be configured.

 In the NE Explorer, click an NE and choose Configuration > Packet Configuration >
Interface Management > Ethernet Interface from the Function Tree. Then, you can
configure basic attributes, traffic control, Layer 2 attributes, Layer 3 attributes, and
advanced attributes for the Ethernet port.

 Click Basic Attributes to find the corresponding UNI port.

 Enable port: Enable.

 Port mode: Layer 2.

 Encapsulation type: 802.1Q.

 Laser status: Enabled.

 Click Apply.

 Click Layer 2 Attributes.

 Tag: Tag Aware.

 Click Apply.
 The NNI is a network-side port at the network layer and corresponds to a port on
the transmission network.

 For an ETH PWE3 service, pay attention to the basic attributes and Layer 3
attributes of the port.

 In the NE Explorer, click an NE and choose Configuration > Packet Configuration >
Interface Management > Ethernet Interface from the Function Tree.

 Click Basic Attributes to find the corresponding NNI port.

 Port Mode: Layer 3.

 Laser Status: Enabled.

 Click Apply.

 Click Layer 3 Attributes.

 Enable Tunnel: Enabled.

 Specify IP: Manually.

 IP address: 4001-10.0.0.5; 4002-10.0.0.9.

 IP mask: 30bits.

 Click Apply.
 Tunnels can be configured in two modes: Per-NE configuration and end-to-end
configuration. This section describes the end-to-end configuration.

 Choose Service > Tunnel > Create Tunnel from the main menu. The Create Tunnel
window is displayed.
 Configure the basic information about MPLS Tunnel.

 Tunnel Name: Any.

 Protocol Type: MPLS.

 Signaling Type: Static CR.

 Service Direction: Bidirectional.

 Protection Type: Protection-Free.


 Configure parameters for configuring a static tunnel.
 In the dialog box that is displayed, click Browse Service to view the configured
tunnel or choose Service > Tunnel > Tunnel Management from the main menu.
 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.

 Service Type: ETH.

 Service ID: manual setting/automatic allocation.


 Click Configure Source and Sink Node. The dialog box for configuring the source
and sink of the PWE3 service is displayed.
 Configure PW basic attributes.

 Set source/sink IP.

 If a forward tunnel is selected, the reverse tunnel is automatically associated.

 Set PW label.
 In the dialog box that is displayed, click Browse Service to view the configured E-Line services or choose Service > PWE3 Service > PWE3 Service Management from the main menu.
 Before configuring the multipoint-to-multipoint E-LAN service, ensure that the
tunnels between nodes are normal.
 Choose Service > VPLS Service > Create VPLS Service from the main menu.

 Service name: Any.

 Signal Type: LDP/Static.

 Networking Mode: Full-Mesh VPLS.

 Service Type: Service VPLS.

 VSI Name/VSI ID: The value can be automatically allocated or manually specified.

 Click Add > NPE. The Configure VSI Service Node dialog box is displayed.
 Set NE13 to NE15 as VSI nodes.
 Configure Tag and MAC address learning method.

 Configure split horizon group.


 In the dialog box that is displayed, click Browse Service to view the configured
VPLS service or choose Service > VPLS Service > VPLS Service Management from
the main menu.
 1+1 protection:

 In normal cases, the transmit end transmits services to the working tunnel
and protection tunnel, and the receive end receives services from the
working tunnel. When the working tunnel is faulty, the receive end receives
services from the protection tunnel.

 1:1 protection:

 In normal cases, services are transmitted in the working tunnel. The protection tunnel is idle. When the working tunnel is faulty, services are transmitted in the protection tunnel.

 Single-ended switching:

 Single-ended switching is unidirectional switching. That is, when the forward or reverse working tunnel is faulty, only the service in the direction where the fault occurs is switched to the protection tunnel.

 Dual-ended switching:

 Dual-ended switching is bidirectional switching. That is, when the forward or reverse working tunnel is faulty, the services in both directions are switched to the protection tunnel.
 When creating a tunnel trail, you can set Protection Type to 1:1 to configure a
tunnel APS protection group in an end-to-end manner.

 When configuring a protection group, you also need to enable the OAM status of
the tunnel.

 You can right-click an NE and choose from the shortcut menu to set the explicit
node of the working or protection trail.
 If the working and protection tunnels are available, you can configure tunnel APS
protection groups in end-to-end mode.

 Choose Service > Tunnel > Create Protection Group from the main menu. In the
Protection Group Configuration window, configure tunnel APS protection.
 1+1 protection:

 Normally, the transmit end transmits services to the working PW and protection PW, and the receive end receives services from the working PW. When the working PW is faulty, the receive end receives services from the protection PW.

 1:1 protection:

 Normally, services are transmitted in the working PW. The protection PW is idle. When the working PW is faulty, services are transmitted in the protection PW.

 Single-ended switching:

 Single-ended switching is unidirectional switching. That is, when the forward or reverse working PW is faulty, only the service in the direction where the fault occurs is switched to the protection PW.

 Dual-ended switching:

 Dual-ended switching is bidirectional switching. That is, when the forward or reverse working PW is faulty, the service in both directions is switched to the protection PW.
 When creating an Ethernet PWE3 service in trail mode, you can set Protection Type
to PW APS Protection.
 Higher-order modulation means higher spectral efficiency. However, according to Shannon's law, the transmission distance is shortened. Therefore, a balance must be found between the two.
 Key technologies of Multi-carrier:

 Nyquist spectrum compression technology improves spectral efficiency and reduces the complexity of DSP.

 The photoelectric integration technology reduces the footprint, power consumption, and costs.
 Huawei provides two 400G WDM-side solutions:

 The first solution is 2*200G, which is called 2SC-PDM-16QAM. Generally, this solution is used in the area/MAN network and the transmission distance is about 800 km. For each 400G channel, when the traditional fixed DWDM solution is used, the wavelength supports the 100 GHz spacing. If the spectrum compression and flex grid technologies are used, the channel spacing of 75 GHz can be supported.

 The second solution is 4*100G, which is called 4SC-PDM-QPSK. Generally, this solution is used for long-distance transmission of backbone networks. The transmission distance can reach 1200 km. For each 400G channel, when the traditional fixed DWDM solution is used, the wavelength supports the 200 GHz spacing. If the spectrum compression and flex grid technologies are used, the channel spacing of 150 GHz can be supported.
 Huawei also provides two solutions for 1T:

 The first solution is 5*200G, which is called 5SC-PDM-16QAM. Generally, this solution is used in the area/MAN network and the transmission distance is about 800 km. For each 1T channel, when the traditional fixed DWDM solution is used, the wavelength supports the 250 GHz spacing. If the spectrum compression and flex grid technologies are used, the 187.5 GHz channel spacing is supported.

 The second solution is 10*100G, which is called 10SC-PDM-QPSK. Generally, this solution is used for long-distance transmission of backbone networks. The transmission distance can reach 1200 km. For each 1T channel, when the traditional fixed DWDM solution is used, the wavelength supports 500 GHz spacing. If the spectrum compression and flex grid technologies are used, the channel spacing of 375 GHz can be supported.
 For a typical fixed WDM system, the channel spacing is N*50 GHz. For a WDM system based on the flex grid optical platform, the channel spacing is N*12.5 GHz.

 In addition, the Flex Grid supports hybrid transmission of wavelengths of different rates, including 10G/40G/100G/400G/1T…. Different rates occupy different channel spacing and support the optimal network capacity.
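The grid arithmetic above can be sketched as follows; the 75 GHz spectrum requirement for a 400G 2SC-PDM-16QAM channel is taken from the text, and the rounding rule is a simplifying assumption:

```python
# Sketch of fixed-grid vs. flex-grid channel allocation: a fixed grid
# allocates spectrum in multiples of 50 GHz, a flex grid in multiples of
# 12.5 GHz, so each rate can take the smallest slot that fits it.
import math

FIXED_SLOT = 50.0   # GHz granularity of a fixed DWDM grid
FLEX_SLOT = 12.5    # GHz granularity of a flex grid

def slots_needed(required_ghz, slot):
    """Smallest multiple of `slot` that covers the required spectrum."""
    return math.ceil(required_ghz / slot) * slot

# A 400G 2SC-PDM-16QAM channel needing ~75 GHz of spectrum:
assert slots_needed(75, FIXED_SLOT) == 100.0  # fixed grid rounds up to 100 GHz
assert slots_needed(75, FLEX_SLOT) == 75.0    # flex grid fits it in exactly 75 GHz
```

The saved 25 GHz per channel is exactly the spectral-efficiency gain the text attributes to the flex grid.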
 The evolution of the 400G/1T solution is divided into three phases. The first phase is the existing fixed WDM system, which is based on the 50 GHz spacing. In the second phase, the Flexible DWDM system uses the flex grid technology to support the N*12.5 GHz channel spacing. Therefore, the spectrum efficiency is higher. Take the 400G 2SC-PDM-16QAM solution as an example. If the existing fixed wavelength system is used, the 400G channel spacing needs to be 100 GHz. If the Flexible DWDM is used, the 400G channel spacing can support 75 GHz, and the spectral efficiency is effectively improved. In the third phase, in the future, the spectral efficiency will be further improved and the transmission distance can be further extended.
 The Noise Figure (NF) of the amplifier affects the receive OSNR of the system. The
NF of the Raman amplifier is far lower than that of the traditional EDFA. Therefore,
for a common fiber network, the Raman amplifier can significantly improve the
system OSNR and meet the requirements of a super 100G system.
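Why a lower noise figure improves OSNR can be illustrated with the common engineering estimate OSNR ≈ 58 + Pin − NF − 10·log10(N) (in dB, 0.1 nm reference bandwidth). The launch power, span count, and NF values below are illustrative assumptions, not product specifications:

```python
# Sketch of the per-channel OSNR estimate for a chain of identical spans:
#   OSNR [dB] ~= 58 + Pin [dBm] - NF [dB] - 10*log10(N spans)
import math

def osnr_db(p_in_dbm, nf_db, spans):
    """Approximate received OSNR for N identical amplified spans."""
    return 58 + p_in_dbm - nf_db - 10 * math.log10(spans)

# Same launch power and span count, plain EDFA (~5.5 dB NF, assumed) vs. a
# hybrid Raman+EDFA stage with a lower effective NF (~1.5 dB, assumed):
edfa = osnr_db(p_in_dbm=0, nf_db=5.5, spans=20)
raman = osnr_db(p_in_dbm=0, nf_db=1.5, spans=20)
assert round(raman - edfa, 1) == 4.0  # the NF reduction shows up dB-for-dB in OSNR
```

In this model every dB shaved off the amplifier NF is a dB gained in system OSNR, which is why Raman amplification matters for super-100G systems.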
 In the coherent network, the dispersion problem of the WDM transmission system is solved. The key factors restricting the transmission rate and distance are fiber attenuation and nonlinearity. The transmission loss of each fiber span can be reduced by increasing the effective cross-sectional area of the optical fiber and reducing the fiber attenuation coefficient. In this context, the G.654E optical fiber standard emerged. Compared with the original G.654 fiber, the G.654E standard further specifies the fiber performance specifications for high-speed signals in terrestrial transmission.

 According to lab test results of G.654E fibers from multiple vendors, the transmission performance of G.654E fibers from all vendors is better than that of G.652 fibers. For 400G services, the transmission distance over G.654E fiber is longer than that over G.652 fiber.
 Huawei Super oDSP chip can not only compensate for large CD/PMD, but also
track the high-speed rSOP. It also adopts three leading technologies to improve
the transmission performance.

 On the transmit side, Huawei Super oDSP performs pre-shaping on high-speed signals to reduce the transmission filtering penalty and increase the transmission distance.

 The non-linear feature of the component causes signal degradation. Huawei Super oDSP uses the adaptive correction algorithm to compensate for component damage and improve signal quality.

 The receive end uses the advanced BICC FEC algorithm for error correction. Compared with the traditional algorithm, the BICC FEC algorithm increases the net coding gain to 11.8 dB based on the collaborative analysis of multiple adjacent data blocks to obtain a stronger error correction capability.

 Based on the Super oDSP chip, Huawei 100G can achieve 6000 km ultra-long-haul transmission, 200G can achieve 1500 km long-distance transmission, and the system capacity can reach 20T+.
 Different vendors use different FEC encoding and decoding modes. Therefore, the FEC processing modes of different vendors cannot be interconnected. (Only the standard FEC, also called GFEC, can interconnect between different vendors.)
 Huawei 100G SD-FEC soft decision algorithm has the following features:
 The innovative full-soft-decision FEC can achieve higher gain, higher
integration, and lower power consumption.
 The 100% soft decision is used, and the HD-FEC code is not cascaded. As a
result, the delay is greatly reduced.
 The unique cascade pipeline architecture greatly reduces the implementation
complexity of soft decision decoding.
 The innovative soft decision architecture can be used to implement flexible
solutions with different power consumption and performance.
 The soft-decision FEC uses the overhead of 20% or higher. In combination
with the spectrum compression technology at the transmit end, the
transmission cost of rate increase can be reduced while the transmission
bandwidth increases. In this way, the gain performance brought by the high
overhead soft decision FEC can be ensured.
 The soft decision FEC is combined with the unique DSP algorithm to provide
differentiated application scenarios.
 The goal of SDN is to separate the control layer from the forwarding layer,
enabling network openness, programmability, virtualization, and automation.

 SDN does not refer to a device or software, but a new network architecture or
network solution.
 Characteristics: Control and forwarding separation, centralized controller, network
openness and programmability, and forwarding plane abstraction.
 The SDN centralized management system virtualizes network bandwidth and
provides various differentiated bandwidth services (VTS) to improve VIP user
experience, increase customer loyalty, and bring value-added service revenue.
 The NCE can manage the physical and virtual network devices of routers, switches, WDM, microwave, SDH, and access networks sold by the network product line. However, the NCE does not manage the devices related to the wireless and cloud core networks, such as vEPC and vIMS. In addition, the NCE can interconnect with third-party IP and optical domain controllers through the Super cross-domain management and control module.

 In terms of functions, the NCE is positioned as the automation platform of the cloud-based network. It needs to interconnect with the customer's service orchestrator or OSS to streamline the process from the customer CRM/BSS to the OSS and to the network.

 The NCE control module is cloudified from the Agile Controller.

 The NCE management unit evolved from the U2000, but the NCE has a larger scope.

 The NCE analysis unit is an integrated system that integrates management, control, and analysis.

 Supports the connection to a third-party domain controller (Domain or Local Controller).
 Although the NCE implements the functional convergence of “management-control-analysis” and cloudification, the reliability requirements of the management and analysis modules differ from those of the control module: sub-second response is required for the control module, while second-level response is required for the management module. Therefore, the control module must be deployed close to the network, while the management module can be deployed nearby or remotely based on the carrier's organization or network scale.

 Considering the preceding factors, when the network scale is small, the NCE can be deployed as a whole in a data center that is close to the managed network. When the network scale is large, the physical coverage distance exceeds 200 km, and the delay between the control module and the controlled device is longer than 10 ms, the different software modules of the NCE need to be deployed hierarchically: the control module must be close to the network, and the management and analysis modules can be deployed remotely in a centralized manner. In addition, even if the network scale does not exceed the limit of the NCE, the hierarchical operation mode of group + subnet or national center + provincial center may be adopted due to the organization and process design of the carrier. In this case, the NCE must be deployed at different levels of data centers to meet the requirements of different organizations for use and maintenance.

 When the NCE control module is faulty, a mechanism similar to the traditional graceful restart (GR) solution is used: the forwarding plane and control plane of the device keep the current control behavior and forwarding behavior unchanged. After the control module recovers, management and data synchronization are performed again, and the data is restored on the control module.
 Data of a large network:

 It takes about three months to collect and analyze data for about 2300 NEs
on the entire network. Five engineers are required.

 The optimization workload is heavy. Therefore, the analysis and optimization operations can be performed only once a year. At other times, network deterioration cannot be handled in time.
 The OSNR detection based on the OD has the following features:

 Simple and convenient operation: The OSNR detection function is deeply integrated with the NMS. The OSNR detection can be realized through the operation of the NMS software. The virtual instrument graphical interface displays the detection result without other auxiliary equipment or other complicated operations.

 High detection precision: The detection precision is better than that of the traditional 10G OSNR detection.

 Wide detection range: Online OSNR detection is supported for all site types and all wavelength rates, including 10G/40G/100G.

 Optical-layer performance O&M of WDM networks based on the OD:

 Centralized configuration of network-wide monitoring: Supports centralized configuration of network optical-layer performance monitoring parameters, greatly saving manpower.

 Automatic monitoring of optical-layer performance: No meter is required to monitor the optical-layer performance of the entire network and automatically detect the optical channels with abnormal performance.

 Automatic optimization of optical-layer performance: Based on the performance data of each optical channel, the system intelligently adjusts the channel power to ensure that each channel works in the optimal state and ensures the optimal performance of the system.
 By evaluating optical-layer indicators such as the OTU BER, OTU input optical power, OMS section optical power flatness, single-wavelength nominal optical power of the OA, OTS section EVOA rationality, gain rationality, and fiber loss, you can clearly identify optical-layer problems on the network and prevent potential service risks.

 By evaluating the protection capability and protection risks, you can intuitively
display the protection configuration and the risks of working and protection routes
in the same subrack, co-board, and co-fiber.

 Provides special analysis on reliability problems, such as co-fiber, co-board, and


co-subrack, to provide information support for special rectification of reliability
problems.

 By analyzing port traffic, you can display the historical trend of port traffic and the
maximum, minimum, average, and current traffic usage of service ports in the
receive and transmit directions. This helps monitor the actual usage of port traffic
and optimize internal resources.
 When the reference clock source or the clock link fails, devices can select a new
clock source to trace if clock protection is configured. This ensures that the entire
network continues tracing the same reference clock. Clock protection can be
implemented in three ways: by disabling the SSM protocol, by enabling the
standard SSM protocol, and by enabling the extended SSM protocol.

 Definition of clock synchronization:

 SDH clock synchronization is a physical-layer frequency synchronization


technology that can be supported by all line boards. The system can extract
clock signals from SDH signals and send the clock signals to each board to
transmit clock information.

 Purpose of Clock Synchronization:

 Clock synchronization ensures that all digital devices in the same communication network work at the same nominal frequency, and minimizes the damage caused by slips, burst bit errors, phase jumps, jitter, and drift. In addition, it minimizes pointer justifications on SDH equipment. This is the prerequisite and basis for normal network operation.

 PRC: Primary Reference Clock.


 The synchronization status information is a 4-bit code with 16 possible values, used to convey the timing signal quality level on a synchronous timing link. Through the SSM, the clock of each NE in the synchronization network obtains the clock synchronization status of its upstream NE, acts on its own clock accordingly (tracing, switching, or holdover), and transmits its own clock synchronization status to the downstream NE.

 Huawei transmission equipment supports two types of SSM protocols: Extended


SSM protocol and standard SSM protocol. The differences between the two
protocols are as follows:

 The standard SSM does not support the clock source ID.

 The extended SSM supports the setting of the clock source ID.
 Extended SSM Protocol

 The clock source ID of a synchronization source is transmitted together with


SSM bytes. The clock source IDs and SSM bytes together determine
automatic clock switching. A clock source ID identifies whether the clock is
from the local NE. If the clock is from the local NE, the clock source is
considered invalid to prevent timing loops. The extended SSM protocol is
mainly used for interconnection of Huawei transmission devices.
 In an SDH network, the external clock node extracts the reference timing from the BITS device, writes the SSM into bits 5-8 of the S1 byte, and then transmits the S1 byte to the downstream node to complete the SSM output. After extracting timing signals from the line signals, the downstream node obtains the synchronization quality level from bits 5-8 of the S1 byte. In this way, the downstream node determines in real time whether the current clock source is valid, and returns an S1 byte with the value 0f toward the upstream node, indicating that the returned clock source is unavailable. This prevents the two nodes from synchronizing to each other (a timing loop).

 To set clock protection, you need to enable the SSM protocol on all NEs in the network. If the SSM protocol is not enabled, an NE cannot extract clock quality information and cannot detect changes in the quality of the current clock source, so it cannot decide whether to switch to another clock source to implement clock protection.
 For the last four bits of the S1 byte, there are 16 possible values, 0x0 to 0xF. The SSM requires only six quality levels; the other values are reserved for future applications. The smaller the SSM value, the higher the quality of the clock source.
 ITU-T defines only the last four bits of the S1 byte; Huawei additionally uses the first four bits, which yields the extended SSM. Enabling the clock source ID means that the extended SSM is in use. The extended SSM is supported only by Huawei; when interconnecting with another vendor's equipment, only the standard SSM can be used.

 Because the equipment has to process all the bits in the S1 byte to judge whether to perform clock protection switching, the software processing in the SDH equipment is somewhat complex. Most vendors prefer to divide the network into parts, which introduces the pseudo-tracing phenomenon; the synchronization problem is then settled by pointer justification and the holdover mode of the NEs.
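 As a concrete illustration of the S1 coding described above, the following Python sketch packs and unpacks an S1 byte for both the standard and the extended SSM. This is illustrative only, not NE software; the quality codes are the standard SSM values (smaller value = higher quality).

```python
# Sketch only: the standard SSM quality level is carried in bits 5-8
# (the low nibble) of the S1 byte; the Huawei extended SSM additionally
# uses the high nibble as a clock source ID (0 under the standard SSM).
SSM_QUALITY = {
    0x0: "quality unknown",
    0x2: "G.811 (PRC)",
    0x4: "G.812 transit (SSU-A)",
    0x8: "G.812 local (SSU-B)",
    0xB: "SETS (internal clock)",
    0xF: "do not use for synchronization",
}

def make_s1(quality, source_id=0):
    """Build an S1 byte. source_id=0 gives the standard SSM form."""
    return ((source_id & 0xF) << 4) | (quality & 0xF)

def parse_s1(s1):
    """Return (source_id, quality); source_id is 0 for standard SSM."""
    return (s1 >> 4) & 0xF, s1 & 0xF

# The values used in the figures: "02" is a standard-SSM G.811 clock,
# "12" is the same clock with source ID 1 under the extended SSM,
# and "0f" means the clock signals are unavailable.
print(hex(make_s1(0x2)))                # -> 0x2
print(hex(make_s1(0x2, source_id=1)))   # -> 0x12
```

 This matches the notation in the following figures: the first hex digit of a value such as "12" is the clock source ID, the second is the quality level.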
 Answer 1:

 The extended SSM can support clock source ID setting.

 A clock source ID is used to distinguish the clock information between local


and other nodes to prevent a node from tracing the clock signal that is
locally transmitted and comes from the negative direction. Hence, a timing
loop is prevented.

 Answer 2:

 After the SSM protocol is enabled, the SDH equipment automatically


determines the quality level of each clock source and selects the clock source
with the best quality to trace the clock source. In this manner, the clock
protection switching is performed when a fault occurs on the network.

 The S1 byte can transmit the SSM information on the SDH network so that
each NE can receive the clock quality level information.

 The clock source ID can prevent clock loops.


 A node that extracts the timing signals of the upstream station from the line returns 0f in the opposite direction at the same time. Each node extracts timing information from its configured clock sources and obtains the synchronization quality information. The synchronization source with the higher quality is traced preferentially; among synchronization sources of the same quality, the one with the highest priority is traced.
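 The selection rule just described (best quality first, configured priority as the tie-breaker, own clock source IDs excluded to prevent timing loops) can be sketched as follows. The data layout is illustrative, not an NE implementation:

```python
UNAVAILABLE = 0xF  # S1 quality value meaning "do not use"

def select_clock(sources, local_ids=()):
    """Choose the clock source to trace.

    sources: list of (name, quality, source_id) in priority order
             (highest priority first); a smaller quality value is better.
    local_ids: clock source IDs originating at this NE (extended SSM);
               a matching source is invalid, preventing timing loops.
    Returns the chosen source name, or None (enter holdover).
    """
    best = None
    for priority, (name, quality, src_id) in enumerate(sources):
        if quality == UNAVAILABLE or src_id in local_ids:
            continue
        # Replace only on strictly better quality, so that among
        # equal-quality sources the earlier (higher-priority) one wins.
        if best is None or quality < best[0]:
            best = (quality, priority, name)
    return best[2] if best else None

# Example: west line in holdover quality (0xb), east line unavailable,
# attached standby BITS is SSU-A (0x4) -> the BITS is traced.
print(select_clock([("west", 0xB, 0), ("east", 0xF, 0), ("bits2", 0x4, 2)]))
# -> bits2
```

 This is the same decision the NEs make in the fiber-cut scenarios on the following slides.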

 The meanings of the numbers in the figure are as follows:

 02: 2 indicates the clock quality level; here the clock is of G.811 level. When the standard SSM protocol is used, the first four bits of the S1 byte are 0x0.

 0f: the clock signals are unavailable.

 NE1 is connected to the external clock equipment (BITS), and BITS provides the
G.811 clock. If the BITS outputs 2Mbit/s clock signals, NE1 can directly extract the
SSM information and trace the clock. NE1 transmits the corresponding clock
information (02) to the downstream through the S1 byte. NE2 traces the clock
signals from NE1, sends the quality level of the clock source to the downstream
station, and returns a 0f message to NE1. The processing methods for other NEs
are the same.

 BITS: Building Integrated Timing Supply system.


 NE 1: External clock source/Internal clock source

 External clock source is the highest level clock source.

 Internal clock source is the secondary level clock source.

 In normal status, NE 1 traces the External clock source. When the external
clock source is unavailable, NE 1 will trace the Internal clock source.

 West clock source/East clock source corresponds to west line board/east line
board.

 When setting the priority of clock sources, consider the impact of the length of the SDH clock tracing reference chain on clock deterioration. For a ring network with fewer than six nodes, you can trace the timing signals from one direction on the entire network. If there are many nodes on the ring network, you are advised to set up two clock tracing chains to achieve network-wide synchronization.
 The fiber is cut between NE3 and NE4. NE4 cannot receive the timing signals from NE3, whose clock transiently becomes unavailable (0f). NE4 then enters the holdover mode and inserts S1 byte 0b toward the downstream node. In the west direction of NE5, the value of S1 is 0b, and in the east it is 0f. Therefore, NE5 traces the west clock source and sends the S1 byte with the value 0b to the downstream station.

 In this case, NE6 determines that the clock tracing is switched to the east clock
source. The S1 byte inserted in the west line is 02 and continues to be transmitted
to NE4. The entire network enters the synchronous state again.

 0b: Synchronous equipment timing source (SETS).


 In this case, the clock tracing is in two directions:

 NE1→NE2→NE3

 NE1→NE6→NE5→NE4
 When the BITS fails, NE1 inserts S1 byte 0b toward each downstream node. Each node on the network then traces the clock according to the S1 byte.
 NE 1: External clock source/East clock source/Internal clock source.

 NE 2/NE 3: West clock source/East clock source/Internal clock source.

 NE 4/NE 5: East clock source/West clock source/Internal clock source.

 NE 6: External clock source/West clock source/Internal clock source.

 In normal cases, the clock of NE1/NE2/NE3 is from BITS (1), and the clock of
NE4/NE5/NE6 is from BITS (2).
 When a fiber cut occurs between NE2 and NE3, NE3 cannot receive clock signals
from NE2 and enters the holdover mode. In addition, NE3 inserts the S1 byte to
the downstream node as 0b.
 NE3 undergoes a switching and receives clock signals from BITS (2).
 When BITS (1) fails, the clock tracing relationship of each NE is shown in the figure.
NE1 and NE2 receive clock signals from the east.

 In the case of a ring topology, can the standard SSM protocol be used for the
dual-BITS configuration mode?
 The external clock connected to NE1 is G.811 and works as the working one. And
the external clock connected to NE4 is SSU-A, and works as the standby one.

 All NEs enable the standard SSM protocol.


 The external BITS of NE1 is G.811 clock, and the external clock of NE4 is SSU-A.

 All the nodes start S1 byte.


 When the main BITS fails, NE1 enters the holdover state and sends the 0b clock information. NE2 and NE3 trace the clock signals, but a residual 0b remains on the line between NE4 and NE5. As a result, NE1 traces the timing signals sent by NE6.

 After the main BITS clock is lost, NE1 therefore traces the clock signals from NE6, and the timing network ends up in an interlocked state. To solve this clock interlock problem, you can enable the extended SSM protocol.
 Two methods to solve the clock loop problem of the standard SSM:

 Enable the extended SSM protocol and allocate clock source IDs to the clock
source to solve the clock loop problem.

 Avoid loops during manual configuration.


 The meanings of the numbers in the figure are as follows:

 12: 1 indicates the clock source ID; 2 indicates the clock quality level, here G.811 level. When the standard SSM protocol is used, the first four bits of the S1 byte are 0x0.

 0f: the clock signals are unavailable.

 NE1 is connected to the external clock equipment (BITS), and BITS provides the
G.811 clock. If the BITS outputs 2Mbit/s clock signals, NE1 can directly extract the
SSM information and trace the clock. NE1 transmits the corresponding clock
information (12) to the downstream through the S1 byte. NE2 traces the clock
signals from NE1, sends the quality level of the clock source to the downstream
station, and returns a 0f message to NE1. The processing methods for other NEs
are the same.
 When the fiber between NE3 and NE4 is cut, NE4 cannot receive the timing signals
from NE3. As a result, NE4 enters the holdover mode and inserts the S1 byte 0b to
the downstream node. In the west direction of NE5, the S1 value is 0b and the east
value is 0f. Therefore, NE5 also changes to the holdover mode and sends the S1
byte 0b to the downstream station.

 In this case, NE6 determines that the clock tracing is switched to the east clock
source. The S1 byte inserted in the west line is 12 and continues to be transmitted
to NE4. The entire network enters the synchronous state again.
 When the switching state is stable, there are 2 clock links:

 NE1→NE2→NE3.

 NE1→NE6→NE5→NE4.
 When the BITS fails, NE1 enters the holdover state and sends the S1 byte with the
value of 2b (b is the quality level of the internal clock source). Other NEs select the
clock source for tracing. Then, the network enters the stable state.
 Normally, the entire network traces the clock signals from the main BITS.

 NE1 transmits the corresponding clock information (12) to the downstream


through the S1 byte.
 The external BITS of NE1 is G.811. The external clock source ID of NE1 is 1, and the internal clock source ID of NE1 is 3.

 The external BITS of NE4 is SSU-A. The external clock source ID of NE4 is 2, and the internal clock source ID of NE4 is 4.
 When the main BITS fails, NE1 enters the holdover state and sends the 3b clock information. NE2 and NE3 trace these clock signals, but the residual clock information 12 remains between NE4 and NE5. Because its clock source ID is 1 (NE1's own external clock source), the signal forwarded from NE5 to NE6 and from NE6 to NE1 cannot be accepted by NE1, so a timing loop is not formed.
 In this case, NE4 determines its clock source. As described on the previous slide, the S1 byte reaching NE4 from the west is 3b and from the east is 0f. According to the clock protection switching principle, NE4 compares the quality level of each clock source: the quality level of the attached standby BITS (SSU-A) is higher than that of the line clock from the west. Therefore, NE4 switches to trace the clock signals of the standby BITS and sends the S1 byte with the value 24.
 After comparing the quality of the available clock sources, all the NEs trace the clock signal from the standby BITS.
 Assume that the external clocks of NE1 and NE4 are G.811. The SSM protocol is
enabled on all nodes on the network. The clock source ID of the main BITS is 1, the
clock source ID of standby BITS is 2, the internal clock source ID of NE1 is 3, and
the internal clock source ID of NE4 is 4. Set the tracing level of the clock source as
follows:
 NE1: External clock source/West clock source/East clock source/Internal clock
source.
 NE4: West clock source/East clock source/External clock source/Internal clock
source.
 Other NEs: West clock source/East clock source/Internal clock source.
 When a fiber cut occurs, NE3 enters the holdover state and outputs 0b. After receiving the S1 byte from NE3, NE4 evaluates the clock quality: the west is 0b, the east is 0f, and the standby BITS (clock source ID 2) has the highest quality level. Therefore, NE4 traces the standby BITS and sends S1 byte 22 downstream. NE5 and NE6 trace the clock signals from NE4 in sequence. However, NE1 does not trace these signals: because the active and standby BITSs have the same clock quality level, NE1 selects its clock source by priority, and the priority of the main BITS is higher than that of the west clock source. Therefore, NE1 traces the main BITS, and NE2 traces the clock signals from NE1. In this case, NE1 and NE2 trace the active BITS while NE4, NE5, and NE6 trace the standby BITS. The entire network is in the pseudo synchronization state.
 Normally, NE5 and NE6 send S1 byte 12 to the west, from their east and west clock sources. When a fiber cut occurs, the S1 byte reaching NE4 from the east is 12 and from the west is 0b. Although the quality level of the standby BITS is 2, NE4 traces the clock signals from the east because the east clock source has a higher priority, and sends S1 byte 12 downstream so that NE3 traces it. Finally, the entire network traces the main BITS.

 The second solution: the clock priority table is not changed, but the S1 byte of the standby BITS is manually set on NE4 to lower its quality level.

 You can manually set the clock source level on the U2000, and thereby make the quality level of the standby BITS lower than that of the main BITS. When a fiber cut occurs, NE6 traces the east clock source because the quality level of the main BITS is higher, and forwards S1 byte 12 to the downstream station. NE5 and NE4 trace the east clock source in sequence, and then all the NEs trace the clock signals of the main BITS.
 NE1: External clock source/Internal clock source.

 NE2: West clock source/East clock source/Internal clock source.

 NE3: West clock source/East clock source/Internal clock source.

 NE4: West clock source/Internal clock source.


 When a chain or ring network enters another ring network, allocate a clock source ID to the line clock source of the entry node, if such a line clock source exists.

 NE1: West clock source/East clock source/Internal clock source.

 NE2: West clock source/East clock source/Internal clock source.

 NE3: West clock source/Internal clock source.

 NE4: External clock source/Internal clock source.


 The external BITS of NE1 is G.811 clock, the external clock source ID is 1, and the
internal clock source ID is 2.

 The external BITS of NE4 is SSU-A, the external clock source ID is 3, and the
internal clock source ID is 4.

 The W1 clock source ID of NE3 is 5, and the internal clock source ID is 6.


 When the main BITS fails, NE1 enters the holdover state and sends S1 byte 2b. NE2 and NE3 trace the clock signals in sequence, and NE3 sends 2b to NE4. NE4 determines that the quality level of the standby BITS clock is higher; therefore, NE4 switches to trace the standby BITS and outputs S1 byte 34. NE3 then selects this clock source to trace and outputs S1 byte 54 (the W1 clock source ID is 5: NE3 replaces the clock source ID of the received clock signal and passes it on to the next node). NE1 and NE2 then choose to trace the clock signals of the standby BITS, and the clock signals are synchronized on the entire network.
 Because the clock source ID of NE3's west line clock source is set, a timing loop is not formed.

 When both the main and standby BITSs fail, NE4 enters the holdover state and sends S1 byte 4b. NE3 traces these clock signals and sends S1 byte 5b to the other NEs. The other NEs trace the clock signals, and the entire network is synchronized.
 If the clock source ID of NE3's west line clock source is not set, a timing loop occurs.

 If the main BITS and standby BITS have the same quality level, the clock tracing links need to trace the main BITS from two directions. The principle for setting the clock priority table is that the standby BITS must be the last node in a chain. This prevents pseudo synchronization from occurring on the entire network when part of a link is interrupted.
 Huawei OptiX transmission equipment provides multiple DCN solutions for the various networking modes of transmission equipment. This course mainly introduces the HWECC solution.

 HWECC solution:

 In this solution, NEs transmit data carrying the HWECC protocol through DCCs. The solution features easy configuration and convenient application. However, because HWECC is a proprietary protocol, the management problems cannot be solved when the network comprises both OptiX equipment and third-party equipment.
 The HWECC protocol stack is a proprietary protocol stack of Huawei. It is the most
applicable and advanced ECC communication solution for Huawei transmission
equipment. The HWECC protocol stack identifies NEs by IDs and creates routes
automatically, which is easy to use.

 ITU-T G.784 defines the architecture of the ECC protocol stack based on the OSI
seven-layer reference model. The HWECC protocol stack is based on the ECC
protocol stack.

 In the HWECC protocol stack, the NE address used by each layer is the ID of the
NE. The NE ID has 24 bits. The highest eight bits represent the subnet ID (or the
extended ID) and the lowest 16 bits represent the basic ID. For example, if the ID
of an NE is 0x090001, the subnet ID of the NE is 9 and the basic ID is 1.
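 The 24-bit NE ID split described above can be expressed directly in code (an illustrative sketch):

```python
def split_ne_id(ne_id):
    """Split a 24-bit HWECC NE ID into (subnet_id, basic_id):
    the highest 8 bits are the subnet (extended) ID and the
    lowest 16 bits are the basic ID."""
    return (ne_id >> 16) & 0xFF, ne_id & 0xFFFF

def join_ne_id(subnet_id, basic_id):
    """Inverse operation: rebuild the 24-bit NE ID."""
    return ((subnet_id & 0xFF) << 16) | (basic_id & 0xFFFF)

# The example from the text: NE ID 0x090001 -> subnet 9, basic ID 1.
print(split_ne_id(0x090001))  # -> (9, 1)
```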

 The main function of the physical layer is to control physical channels. The physical
layer performs the following functions:

 Receives and sends data over the physical channels, including receiving data
from physical channels and transferring the data to the upper layer.

 Receives the data frames transferred from the upper layer and sends them to
physical channels.

 The channels at the physical layer include DCC channels and extended ECC
channels. The physical layer can process the data frame with a maximum of
1024 bytes.
 The physical layer of the ECC is the DCC, whose data is carried over fiber. In certain cases, a network or NE may be isolated, with no DCC channel (no fiber connection) to the gateway NE. The extended ECC is the ECC protocol stack carried over the TCP/IP protocol stack; that is, the HWECC protocol stack is carried through an extended channel (such as Ethernet) instead of the DCC channel, to meet the requirements of such special scenarios. The difference between the extended ECC and the ECC is that the physical layer of the ECC is the DCC channel, while that of the extended ECC is an extended channel (such as an Ethernet channel).

 HWECC uses the D1-D3 bytes as the physical transmission path. The D4-D12 or
D1-D12 bytes can also be used.

 HWECC supports the communication by using fibers or Ethernet cables. When no


optical path is available between nodes, set the extended ECC by using Ethernet
cables.
 The HWECC solution adopts the shortest path first algorithm to establish ECC
routes. In this context, the shortest path refers to the path with minimum number
of stations.
 The following describes how an NE establishes ECC routes:
 The physical layer of an NE maintains the status information of the DCC to
which each line port corresponds.
 The MAC layer of the NE establishes the MAC connection between the NE
and the adjacent NE.
 The NE broadcasts the connection request frame (MAC_REQ) to the adjacent
NE in a periodical manner.
 After receiving the MAC_REQ, the adjacent NE returns the connection
response frame (MAC_RSP).
 If the MAC_RSP is received within the specified time, the NE establishes a
MAC connection between itself and the adjacent NE.
 The NET layer of the NE establishes the NET layer routing table.
 According to the status of the MAC connection, the NE establishes an initial
NET layer routing table.
 The NE broadcasts its routing table to the adjacent NE in a periodical manner
through the routing response message.
 The adjacent NE updates its NET layer routing table according to the received
routing response message and the shortest path first algorithm.
 At the next route broadcasting time, the NE broadcasts its current NET layer
routing table to the adjacent NE.
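 The route establishment steps above converge on minimum-hop paths. A breadth-first sketch of the resulting NET-layer routing table (illustrative only; a real NE builds this incrementally from the periodic route broadcasts, not in one pass):

```python
from collections import deque

def build_routing_table(links, start):
    """Minimum-hop routing table for NE `start`.

    links: dict mapping each NE ID to the list of NEs it has a MAC
    connection with. Returns {destination: (next_hop, hop_count)}.
    """
    table = {start: (start, 0)}
    queue = deque([start])
    while queue:
        ne = queue.popleft()
        for neighbor in links.get(ne, ()):
            if neighbor not in table:
                # First path found by BFS is a shortest (fewest-hop) path.
                next_hop = neighbor if ne == start else table[ne][0]
                table[neighbor] = (next_hop, table[ne][1] + 1)
                queue.append(neighbor)
    return table

# A six-NE ring: NE4 is 3 hops away from NE1 in either direction.
ring = {1: [2, 6], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6], 6: [5, 1]}
print(build_routing_table(ring, 1)[4])  # -> (2, 3) with this BFS order
```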
 The implementation principle is as follows:

 The U2000 transfers application layer messages to the gateway NE through


the TCP connection between them.

 The gateway NE extracts the messages from the TCP/IP protocol stack and
reports the messages to the application layer.

 The application layer of the gateway NE queries the address of the


destination NE in the messages. If the address of the destination NE is not
the same as the address of the local station, the gateway NE queries the core
routing table of the network layer according to the address of the destination
NE to obtain the corresponding route and the communication protocol stack
of the transfer NE.

 After receiving the packet that encapsulates the messages, the network layer
of the transfer NE queries the address of the destination NE of the packet. If
the address of the destination NE is not the same as the address of the local
station, the transfer NE queries the network layer routing table according to
the address of the destination NE to obtain the corresponding route and
then transfers the packet.

 After receiving the packet, the network layer of the destination NE reports
the packet to the application layer through the Layer 4 because the address
of the destination NE of the packet is the same as the address of the local
station.
 Here the shortest path means the minimum number of hops, not the physical distance.

 ECC routes are discovered automatically; no manual routes need to be added under normal conditions.
 The networking capability restriction refers to the restriction on the number of the
NEs that are interconnected through the DCC (or extended ECC). That is, the ECC
networking restriction is actually the number of NEs that are managed by a
gateway NE.

 Considering the factors above, combining field experience, and referring to common industry practice, ECC networking for OptiX equipment must be properly planned, to avoid the severe impact that an oversized network has on normal maintenance and secure operation.

 To check whether the ECC between ECC subnets is closed off: log in to the gateway NE, use the U2000 to query the routing table, and check whether any NE of another subnet appears in it. Perform this query through the NM system.
 Answer:

 If the ECC network planning is improper, the possible causes are as follows:

 The NE becomes unreachable to the NMS, affecting monitoring, maintenance,


and management.

 The channel is blocked. As a result, alarms are lost or delayed, and service
configuration and data uploading and downloading are affected.

 The host may reset abnormally, affecting services.

 The software loading efficiency is affected.


 Auto extended ECC

 In auto extended ECC mode, the ECC connection can be established by connecting the Ethernet interfaces of two NEs with a crossover cable (or with straight-through cables via a hub), without designating a Server or Client.

 Manual extended ECC

 In manual extended ECC mode, one NE must be set as the Server (generally the NE nearest to the gateway NE), and the other NEs are set as Clients.

 If there are only two NEs, a crossover cable can connect them. If there are many NEs, it is suggested to connect all NEs to a hub with straight-through cables.

 Server and Client

 The connection established by extended ECC is actually a TCP connection. Once the Server IP is set on a Client, a connection request is sent to the Server. When the Server receives the request, it establishes a connection with the Client. The concepts of Server and Client are therefore similar to those of a common TCP/IP server and client.

 Some newer products can be set as both Server and Client; in this case, the Client and Server use different ports.
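 Because the extended ECC Server/Client relationship is ordinary TCP, it can be illustrated with a minimal Python sketch. This is purely an analogy, not NE software; port 1601 is taken from the configurable 1601-1699 range used for extended ECC.

```python
import socket
import threading
import time

PORT = 1601  # extended ECC port numbers are configurable: 1601-1699

def server():
    # The "Server" NE (generally the NE nearest to the gateway NE)
    # listens and accepts the incoming connection.
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"ECC link up")

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the listener time to start

# The "Client" NE is configured with the Server's IP address and port,
# and initiates the TCP connection.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", PORT))
    msg = cli.recv(64).decode()
print(msg)
```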
 Configure the extended ECC in auto mode:

 Right-click the NE and choose NE Explorer.

 Click Communication in the NE Explorer and choose ECC Management.

 Choose Auto Mode for ECC Extended Mode and click Apply.
 In manual extended ECC mode, one NE must be set as the Server (generally the NE nearest to the gateway NE), and the other NEs are set as Clients.

 Configuration of the Server in manual extended ECC mode:

 Choose Specified Mode in the window, enter the port number of the Server, and apply it.

 The port number ranges from 1601 to 1699, and the Client's port number must match the Server's.

 An NE can be set only as a Client or as a Server, not both.

 In manual extended ECC mode, one NE must be set as the Server (generally the NE nearest to the gateway NE), and the other NEs are set as Clients.

 Configuration of a Client in manual extended ECC mode:

 Choose Specified Mode in the window, enter the IP address and port number of the Server, and apply it.

 The IP address used by the Client is the same as the IP address of the NE.

 The port number ranges from 1601 to 1699, and the Client's port number must match the Server's.
 The DCC communication function can be enabled or disabled on the U2000 according to configuration requirements.

 Use this function carefully, because it may affect normal ECC communication.
 Choose DCC Management in the Communication menu of the NE Explorer.

 Select the specific DCC channel that needs to be disabled, and disable it.

 Note:

 Disabling here means disabling all the DCC channels of the port, including D1-D3 and D4-D12. For instance, the system disables D4-D12 automatically when you disable only D1-D3.
 Right-click the specific DCC channel that you want to delete, and delete it.

 This operation does not affect other DCC channels. For instance, the system does not delete the D4-D12 channel when you delete only the D1-D3 channel of an SDH interface.

 A deleted DCC channel can be restored by creating a new one.
 An SDH network may consist of equipment from different vendors. As a result, other vendors' equipment may be separated from each other by Huawei equipment. Because the SDH equipment supplied by Huawei uses a different ECC path or ECC protocol from that used by other vendors' equipment, the other vendors' equipment would fail to communicate with each other over the ECCs.

 Transparent transmission between the same D bytes on two boards:

 This function transparently passes the designated overhead bytes from an


optical board to the same overhead bytes of another optical board.

 Transparent transmission between different D bytes of two boards (cross-connect):

 This function cross-connects the designated overhead bytes on one optical


board with other overhead bytes on another optical board.
 Before configuring transparent transmission, you can delete the DCC channel (D1 –
D3) in the NE Explorer to release DCC resources.

 By default, the DCC bytes are sent to the control board through the backplane bus for interpretation and termination. After the D1-D3 channel is deleted, the D1-D3 bytes are no longer terminated on the control board; once transparent transmission is configured, they are passed through directly.
 Choose DCC Transparent Transmission Management.

 Click New. The Create DCC Transparent Transmission Byte dialog box is displayed.
(Only one byte can be transparently transmitted between two ports at a time.)

 Set the parameters related to transparent transmission of overhead bytes, as


shown in the slide.

 Click OK. In the Operation Result dialog box, click Close.


 In the case of a third-party network, if the third-party network does not support
the DCC transparent transmission function, the external clock interface can be
used to transparently transmit the DCC function so that the NMS can manage the
network of Huawei equipment in a unified manner.

 If the third-party transmission network does not exist, you can use cables to
directly connect the external clock ports of NE1 and NE2 to transparently transmit
the DCC information.
 Configure transparent transmission of DCC bytes on the external clock interface of
NE1.

 In the NE Explorer, choose Communication > DCC Management from the


Function Tree.

 Click DCC Rate Configuration and set Enable Status of the external clock port
to Enabled. Then, click Apply.
 Normally, OptiX equipment uses the D1-D3 bytes for the ECC. DCC expansion increases the DCC rate from 192 kbit/s to 576 kbit/s, which relieves the ECC communication bandwidth bottleneck in certain spans of the network.
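 These rates follow from the SDH frame rate: each D byte recurs once per 125 µs frame, i.e. 8 bits / 125 µs = 64 kbit/s per byte. A quick arithmetic check (sketch):

```python
FRAME_RATE = 8000   # SDH frames per second (one frame every 125 us)
BITS_PER_BYTE = 8

d_byte_rate = FRAME_RATE * BITS_PER_BYTE  # 64,000 bit/s per D byte
assert d_byte_rate == 64_000

assert 3 * d_byte_rate == 192_000   # D1-D3: regenerator-section DCC
assert 9 * d_byte_rate == 576_000   # D4-D12: multiplex-section DCC
assert 12 * d_byte_rate == 768_000  # D1-D12 used together
```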
 Configuration procedure:

 Select the corresponding NE. In the NE Explorer, choose DCC Resource


Allocation from the DCC Management drop-down list. Check whether the
D4-D12 mode is allocated. If not, set the corresponding byte mode.

 In the DCC Management area, select DCC Rate Configuration.

 In the Rate Type column, change the rates of D1-D3 to D4-D12. Click Apply.
 Function:

 When the network is in synchronous mode, the pointer is used to perform


phase calibration between synchronization signals.

 When the network loses synchronization (that is, in quasi-synchronous mode),


the pointer is used for frequency and phase calibration. When the network is
in asynchronous mode, the pointer is used for frequency tracking calibration.

 The pointer can also be used to accommodate the frequency jitter and drift
in the network.
 Row 4 of columns 1 to 9 × N in the STM-N frame carries the AU-PTR, which specifies the location of J1, the first byte of the VC4 in the AU-4 frame, so that the receive end can conveniently separate each VC4.

 The AU-PTR is composed of nine bytes, H1 Y Y H2 F F H3 H3 H3. The pointer value is contained in H1 and H2, whose last 10 bits designate the location of the byte where the VC4 begins. The three H3 bytes form one pointer justification opportunity unit, i.e. one "cargo unit".

 For the convenience of locating each byte in the VC4 (actually, each cargo unit), each cargo unit is assigned a location value within the AU-4 payload, as shown in the figure. The three-byte unit immediately following the H3 bytes is Location 0, and so on. An AU-4 payload area thus has 261 × 9 / 3 = 783 locations, so a valid AU-PTR value must be in the range of 0 to 782; any other value is an invalid pointer.
 If the frame rate of the VC4 is higher than that of the AU-4, i.e. the packing rate of the AU-4 is lower than the loading rate of the VC4, then the time for loading a VC4 (the cargo) is less than 125 µs (the stopping time of the truck). The VC4 continues to be loaded before the truck leaves, but the cargo box of the truck (the information payload area of the AU-4) is already full and cannot accommodate more cargo. In that case, the three H3 bytes (one justification opportunity) are used to carry the extra cargo, like a backup space temporarily added to the truck. The location of all cargo units is then shifted forward by one unit (three bytes), so that more cargo (one VC4 plus 3 bytes) can be packed into the AU-4; the location of each cargo unit (3 bytes per unit) changes accordingly.

 This justification method is called negative justification; in this case the AU-PTR value is 521. The three H3 bytes are called the negative justification opportunity and, at that moment, are filled with VC4 payload. Via this justification method, the first three bytes of the next VC4 are loaded on the current truck.
 If the frame rate of the VC4 is lower than that of the AU-4, i.e. a complete VC4 cannot be loaded during the stopping time of the AU-4 "truck", then the last three bytes (one cargo unit) of the VC4 must be transported by the next truck. Since the AU-4 has not been filled with a complete VC4 (it lacks one 3-byte unit), the cargo box has an empty space of 3 bytes. To prevent the cargo from shifting during transmission because of this empty space, three stuffing bytes are inserted immediately after the three H3 bytes. All the 3-byte units of the VC4 are then displaced backward by one unit (3 bytes), so the position of these cargo units changes.

 This justification method is called positive justification, and the position of the three bytes inserted immediately after the H3 bytes is called the positive justification opportunity. If the rate of the VC4 is much lower than that of the AU-4, more than one positive justification unit (3 bytes) must be inserted into the AU-4 payload area. Note that there is only one negative justification opportunity (the three H3 bytes): the negative justification opportunity is located within the AU-PTR, while the positive justification opportunity is located within the payload area of the AU-4.
 From the perspective of the entire network: because the clocks of NE A and NE B are not synchronized, the west line board of NE A generates a positive AU pointer justification. In addition, because the 2 Mbit/s services on NE A are lower order cross-connections, the AU pointer justification is converted into a TU pointer justification, which is reported by the tributary board. The west line board of NE B generates a negative AU pointer justification event. Because the service on NE B is a VC-4 higher-order pass-through service, the negative AU pointer justification indication signal is generated by the east line board and transmitted to NE C. The west line board of NE C reports a negative AU pointer justification event. Because the 2 Mbit/s service on NE C is a lower order cross-connection, the AU pointer justification is converted into a TU pointer justification, which is reported by the tributary board.
 As can be seen from the above analysis, when NE1 and NE2 are asynchronous, NE2 generates TU pointer justifications instead of AU pointer justifications.

 The services of slots 33~63 are passed through to NE3 together with the TU pointer justification information, so NE3 reports the TU pointer justification.

 Is there any AU pointer justification reported?

 No. The AU pointer justifications have already been transformed into TU pointer justifications; they have been terminated.
 When the services are configured as pure pass-through at the VC12 level, the service transmission is as shown in the figure.

 If NE1 and NE2 are asynchronous, NE2 generates neither AU pointer justification nor TU pointer justification, because nothing is added or dropped at NE2.

 Only when the service reaches the destination NE3 is the TU pointer justification reported.
 Currently, the SDH equipment adopts the remote detection method.
 When the pointer justification events occur, the system is not affected.

 If the pointer justification event occurs frequently, you need to find out the causes
and take proper measures to ensure that the system runs stably.
 Provide clock compensation for a long clock chain: a transmission link should contain no more than 10 G.812 slave clocks; there should be no more than 20 G.813 clocks between two G.812 slave clocks, no more than 20 G.813 clocks between the G.811 clock and a G.812 slave clock, and no more than 60 G.813 slave clocks in total.
 Alarms on the U2000: TEMP_ALARM, TEMP_OVER.
 The SDH equipment has high requirements on the external clock source: it must at least meet the G.813 synchronization quality level. If the accuracy of the external clock source is too low or its quality deteriorates, pointer justifications occur across the entire network.
 Based on the pointer justification concepts described previously, in the two ideal networking and configuration conditions below, the sites generating AU pointer justification can be located directly from the reported pointer justification performance events.

 Case 1:

 As shown in the figure, Site 1 is the central service site and has 2M services with the other sites; there are no services between the other sites. The clock of Site 1 is in free-run mode, while the other sites trace the clock of Site 1 westward.

 In this case, the analysis shows that, along the clock trace direction, the first site reporting a TU pointer justification event is the site generating AU pointer justification (i.e., the site whose clock is not synchronized), not counting the central service site. Based on this, it can be judged that the fault is at this site or the previous one. Further locate and remove the fault by swapping boards or modifying the configuration.
 Case 2:

 As shown in the figure, along the clock trace direction, there is a 2M service between the source clock reference site and the end clock trace site. The VC4 timeslot carrying the service passes through the intermediate sites at the VC4 level. In this case, along the clock trace direction, observe the service from the source clock reference site to the end clock trace site. If the line card at a site is the first one to report the AU pointer justification performance event (not counting the source clock reference site), pointer justification has occurred at the previous site.

 For example, if the west optical card at Site 4 is the first one to report the AU pointer justification performance event, it can be judged that pointer justification has occurred at Site 3 (i.e., the clocks of Site 2 and Site 3 are not synchronized); possibly the west line card, clock card or cross-connect unit of Site 3, or the west line card of Site 2, is faulty.
 SETS is short for Synchronous Equipment Timing Source.

 "W/E/SETS" here means that the clock source from the west line unit has a higher priority than that from the east line unit, which in turn has a higher priority than SETS.
 As shown in the following figure, there are a large number of pointer justifications across the whole network: site 1 has a large number of negative TU justifications, and sites 2, 3 and 4 have a large number of positive and negative AU and TU pointer justifications.

 Case processing steps:

 Check the fiber connections of the sites: no problem found.

 Check the clock configuration on the NMS: no problem found.

 For the anti-clockwise direction, AUPJCHIGH reported in the west of site 3 and site 4 indicates that the clock of site 2 is faster than that of site 1.

 For the clockwise direction, AUPJCHIGH reported in the east of site 3 and site 4 indicates that the clock of site 4 is also faster than that of site 1.

 Because site 3 and site 4 both trace the clock from their west line units, that is, from site 2, we can conclude that site 2 is probably not synchronized with site 1. The east optical card of site 1, or the clock unit or west optical card of site 2, may be faulty.
 After switching the clock tracing direction of sites 2, 3 and 4 to eastward, we found that only sites 1 and 2 had TU pointer justifications, showing that site 2 could not synchronize with site 1 while sites 3 and 4 could.

 No matter which direction is traced, site 2 cannot synchronize with site 1. Most probably the clock card of site 2 is faulty.

 While replacing the CXL board of site 2, we found that its handle bar was hot. We checked the fan and cleaned the filter; after a while, the pointer justifications disappeared.

 The AU pointer justifications are all caused by the faulty clock card of site 2.

 It also shows that cleaning the fan filter during routine maintenance is very important.
 Answer: C.
MSTP Technology Topic -
PCM Technology

www.huawei.com

Copyright © 2018 Huawei Technologies Co., Ltd. All rights reserved.


Objectives
 Upon completion of this course, you will be able to:
 Describe the basic principles of PCM

 Understand different PCM application scenarios

 Describe the functions and features of the PCM board

Contents
1. PCM Technology Introduction

2. PCM Principles and Application

3. PCM Boards

PCM Technology Introduction (1)
 Before the telephone was invented, people passed messages through postmen.

 In the middle of the 19th century, after the telephone appeared, people transmitted information by using metal wires to carry analog signals.

 After the middle of the 20th century, with the maturing of optical fiber technology, people transmitted information by using optical fibers to carry digital signals.

 How can digital signals be generated?

PCM Technology Introduction (2)
 A new signal generated by sampling, quantizing and encoding a
continuously changing analog signal is a digital signal.

Analog Signal Sampling Quantizing Encoding

PCM Technology Introduction (3)
 Concept: Pulse Code Modulation

 Principle: In an optical fiber communication system, binary "0" and "1" optical pulses are transmitted over the fiber; the light source is switched on and off by a binary pulse code modulation digital signal. The digital signal itself is generated by sampling, quantizing, and encoding a continuous analog signal; this process is PCM.

 The current digital transmission system uses the pulse code modulation
technology.

PCM Solutions in the Industry
 PCM devices are separated from SDH devices.
Telephone
Telephone

Fax machine SDH Network


Fax machine

Computer
Computer

PCM device SDH device SDH device PCM device


Switch
Switch

 Existing problems:
 A large number of devices occupy a lot of equipment room space.

 Network connections are complex, and the devices are difficult to maintain and manage in a unified manner.

Huawei Embedded PCM Solution
 The MSTP equipment and the PCM equipment are combined into one device to meet the multi-service access requirements of enterprise communications.
Telephone
Telephone

Fax machine
Fax machine
SDH Network

Computer
Computer

OptiX OSN OptiX OSN


Equipment Equipment
Switch
Switch

OptiX OSN equipments with PCM boards

Contents
1. PCM Technology Introduction

2. PCM Principles and Application

3. PCM Boards

FXS and FXO
 FXS and FXO ports are used for analog telephone lines. They are usually
used in pairs, similar to the relationship between a plug and a socket. An
FXO port receives the dial tone voltage from an FXS port.

 If a PBX is not used, the telephone's FXO port is directly connected to the
FXS port provided by the telephone company.

FXS FXO

Telephone company Telephone

Call Process between FXS and FXO Ports
 Initiating a Call
 A user picks up telephone A. The FXS-A port on the PBX enters the off-hook state.

 The user dials a number on telephone A. The number is converted into DTMF or pulse
digital signals and is transmitted to the FXS-A port of the PBX.

 Answering a Call
 The FXS-B port of the PBX acknowledges the call and provides ringing voltage for
telephone B.
FXS-A FXS-B
 Telephone B rings. FXO FXO

 Another user picks up telephone B to answer.


Telephone A PBX Telephone B
 Ending a Call
 The call ends when telephone A or B is hung up.

BROSCHT
 An FXS port provides the BROSCHT functions.
 B: Battery feeding

 R: Ringing

 O: Over voltage/current protection

 S: Supervision

 C: Coding/decoding (codec)

 H: Hybrid circuit

 T: Test

FXS/FXO Application (1)
 Z interface extension services
 An FXS port and an FXO port are used in pairs to transparently transmit
telephone data. In private network applications, an analog telephone connected
to a PBX sometimes needs to be extended to another place. This requirement
can be well addressed by Z interface extension services.
FXO FXO
FXO STM-N FXS
FXS
64kbit/s /E1/E3/E4 64kbit/s
64kbit/s 64kbit/s
PBX
FXO OptiX OSN OptiX OSN FXO
Equipment Equipment

PCM board

SDH line board/PDH tributary board

FXS/FXO Application (2)
 Hotline services
 FXS ports are used in pairs to transmit private telephone data.

 Hotline services, also called driving telephone services between stations, are
used for point-to-point voice communication between adjacent stations and
can be connected upon off-hook without dialing.
FXS STM-N FXS
FXO /E1/E3/E4 FXO
64kbit/s 64kbit/s

OptiX OSN OptiX OSN


Equipment Equipment
PCM board

SDH line board/PDH tributary board

E&M
 E&M is a trunk and signaling technology commonly used on PBXs. E&M
separates the signaling and voice trunks. E&M is also metaphorically called
Ear and Mouth, or RecEive and TransMit. An E&M port is usually connected
to a PBX, which outputs signals through the M line and receives signals
from the E line.

2/4-Wire Audio and E&M Application (1)
 2/4-wire audio and E&M trunk
 OptiX OSN equipment is connected to PBXs using 2/4-wire audio and E&M
ports. E&M channels transmit signaling, while 2/4-wire audio channels transmit
audio services. OptiX OSN equipment functions as a trunk for the signaling and
audio services.
2/4-wire 2/4-wire
audio E&M STM-N audio E&M
64kbit/s /E1/E3/E4 64kbit/s
64kbit/s 64kbit/s
PBX PBX
(with E&M trunk) OptiX OSN OptiX OSN(with E&M trunk)
Equipment Equipment

PCM board

SDH line board/PDH tributary board

2/4-Wire Audio and E&M Application (2)
 Point-to-point control signal transmission
 If only an E&M channel is used, it transmits only one channel of connectivity (on/off) signals. This applies to scenarios where control signals are transmitted remotely to implement remote control.

E&M E&M
STM-N
/E1/E3/E4
Control Control
center center

OptiX OSN OptiX OSN


Equipment Equipment

PCM board

SDH line board/PDH tributary board

2/4-Wire Audio and E&M Application (3)
 2/4-wire audio signal transmission
 If a 2/4-wire audio line is used independently, it transmits only one channel of
audio signals. It is usually used on an audio terminal to transmit audio signals.

2/4-wire audio 2/4-wire audio


STM-N
/E1/E3/E4
Audio Audio
terminal terminal

OptiX OSN OptiX OSN


Equipment Equipment

PCM board

SDH line board/PDH tributary board

G.703 64 kbit/s Codirectional Service
 In 64 kbit/s codirectional services, information (64 kbit/s service signals) and the related timing signals (64 kHz and 8 kHz timing signals) are transmitted through a port in the same direction. One balanced cable pair is used for each transmission direction, and the three types of signals are transmitted as one composite signal.
G.703 64kbit/s codirectional service port

Terminal

OptiX OSN Equipment

PCM board 64kbit/s service signals

SDH line board/PDH tributary board Timing signals

G.703 64 kbit/s Codirectional Service Application
 G.703 64 kbit/s codirectional services transmit baseband digital services
over PCM channels by fully utilizing standard 64 kbit/s speech channel
resources. This type of service is commonly used in the dedicated
communication networks for electricity and railway transportation.

G.703 64kbit/s G.703 64kbit/s


codirectional service codirectional service
Relay STM-N Relay
protection 64kbit/s /E1/E3/E4 protection
64kbit/s
and and
control control
device device
OptiX OSN OptiX OSN
Equipment Equipment

PCM board 64kbit/s service signals


SDH line board/PDH tributary board Timing signals

Boolean Value Remote Access
 Multiple Boolean value signals are input using the PCM board that
supports the Boolean value remote access function. The combiner inside
the PCM board aggregates the Boolean value signals into one signal, and
the signal is transmitted to the sink end through the SDH line board/PDH
tributary board.

Boolean Value Remote Access Application (1)
 Local dry contact signal termination

NE4

Dry contact
signal
Environment
monitoring STM-N/E1
Port1
unit
NE1 NE3

NE2
PCM board with dry contact signal function OSN Equipment
SDH line board/PDH tributary board Dry contact signal flow

Boolean Value Remote Access Application (2)
 Dry contact signal transmission

NE4

Dry contact Dry contact


Environment signal signal
Alarm/Monitoring
monitoring STM-N/E1
Port1 platform
unit Port11
NE1 NE3

NE2

PCM board with dry contact signal function OSN Equipment


SDH line board/PDH tributary board Dry contact signal flow

Boolean Value Remote Access Application (3)
 Dry contact signal combination

NE4

Environment Dry contact


monitoring signal A Dry contact
unit signal
Port1 Alarm/Monitoring
Port2 STM-N/E1 Port11
Environment platform
monitoring NE1 NE3
Dry contact NE2
unit
signal B

Environment Port1
monitoring Dry contact signal C
unit

PCM board with dry contact signal function OSN Equipment


SDH line board/PDH tributary board Dry contact signal flow

Sub-rate Application (1)
 X.50 D3-based encapsulation
 Sub-rate (X.50 D3-based encapsulation) services are used for synchronization
between narrowband devices and transmission of asynchronous data services,
at a rate ranging from 2.4 kbit/s to 48 kbit/s.
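The 48 kbit/s ceiling quoted above follows from the X.50 envelope structure, in which 6 of every 8 bits of the 64 kbit/s bearer carry user data while the remaining bits are framing/status. A sketch of that arithmetic, stated as an assumption rather than a full X.50 treatment:

```python
ENVELOPE_BITS = 8  # X.50 8-bit envelope
DATA_BITS = 6      # user-data bits per envelope (assumed 6-of-8 structure)

def x50_payload_kbit(bearer_kbit=64):
    """Usable sub-rate capacity of a 64 kbit/s timeslot under X.50."""
    return bearer_kbit * DATA_BITS // ENVELOPE_BITS

print(x50_payload_kbit())  # 48
```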

X.50 D3-based X.50 D3-based


encapsulation encapsulation
Sub-rate port Sub-rate port
STM-N
Sub-rate /E1/E3/E4 Sub-rate
Device A Device B

OptiX OSN OptiX OSN


Equipment Equipment

PCM board

SDH line board/PDH tributary board

Sub-rate Application (2)
 V.110-based encapsulation
 Sub-rate (V.110-based encapsulation) services are used for synchronization
between narrowband devices and transmission of asynchronous data services,
at a rate ranging from 0.6 kbit/s to 56 kbit/s.

V.110-based encapsulation
Sub-rate Sub-rate Sub-rate
Sub-rate
port port Device A
Device A STM-N
/E1/E3/E4

64kbit/s
Sub-rate Sub-rate
timeslot
Device B OptiX OSN OptiX OSN Device B
shared
Equipment Equipment

PCM board

SDH line board/PDH tributary board

VF-Meeting Signaling Scheme
 VF-Meeting groups are managed through signaling interaction.
 Basic state: when the input signaling of all members in a VF-Meeting group is idle, the
output signaling of all the members is also idle.

 Calling state: when a first member receives busy signaling, the VF-Meeting group
broadcasts busy signaling to all the other members.

 Traffic state: when a second member receives busy signaling, the VF-Meeting group
transmits busy signaling only to the members that have received busy signaling and
transmits idle signaling to other members. When the VF-Meeting group is in the traffic
state, if a third member receives busy signaling, the VF-Meeting group also transmits
busy signaling to this member.

 Termination state: if the last member receives busy signaling, the VF-Meeting group
transmits idle signaling to all members.
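The four states above can be condensed into a single output function. A sketch, where `busy` is the set of members whose input signaling is currently busy; the function and state names are illustrative:

```python
def vf_meeting_outputs(members, busy):
    """Output signaling sent to each member of a VF-Meeting group."""
    if not busy:
        return {m: "idle" for m in members}   # basic state
    if len(busy) == len(members):
        return {m: "idle" for m in members}   # termination state (per the rule above)
    if len(busy) == 1:
        return {m: "busy" for m in members}   # calling state: broadcast busy
    # traffic state: busy only toward members that have received busy signaling
    return {m: ("busy" if m in busy else "idle") for m in members}

group = ["A", "B", "C", "D"]
print(vf_meeting_outputs(group, set()))       # all idle
print(vf_meeting_outputs(group, {"A"}))       # all busy
print(vf_meeting_outputs(group, {"A", "B"}))  # A, B busy; C, D idle
```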

Conference Application
 Implement communication between the master and slave devices and
protection between slave devices.
Active network

Slave-1(active)

2M 2.4kbit/s~48kbit/s

2.4kbit/s~ Slave-2(standby)
48kbit/s OptiX OSN
OptiX OSN 3500
3500
Master STM-N

Slave-3(standby)
OptiX OSN
1500
2M 2.4kbit/s~48kbit/s

Slave-4(standby)
OptiX OSN 3500 OptiX OSN 3500

Standby network

DX1 board Valid service signals on the Slave->Master path


PDH tributary board Invalid service signals on the Slave->Master path
SDH line board Valid service signals on the Master->Slave path

Multiplexer Group Application (1)
 Data-P2MP Scenario

Slave-1
Slave-1
Master
Slave-2

Slave-3
Master Slave-2 Multiplexer
Group
Slave-1
OptiX OSN Master
Equipment AND Slave-2

Slave-3
Slave-3

Valid service signals from a slave device to the master device


DX1 board
Invalid service signals from a slave device to the master device
SDH line board Valid service signals from a master device to a slave device

Multiplexer Group Application (2)
 Data-Meeting Scenario
Multiplexer group
OptiX OSN Member-2
Equipment
Member-1 AND Member-3
Member-1 Member-4
Member-4
Member-1

Member-2 AND Member-3

Member-2 Member-3 Member-4

Member-1

Member-3 AND Member-2

Member-4
Member-1

Member-4 AND Member-2

Member-3

DX1 board Transmit data signals

SDH line board Receive data signals

Multiplexer Group Application (3)
 VF-P2MP Scenario

Slave-1
Slave-1
MAX(Slave-1, Master
Slave-2
Slave-2,Slave-3)
Slave-3
Master Slave-2
Multiplexer
group Slave-1
OptiX OSN Master MAX(Slave-1,
Equipment Slave-2
Slave-2,Slave-3)
Slave-3
Slave-3

PCM board Valid services from a slave device to the master device
SDH line board Valid services from the master device to a slave device

Multiplexer Group Application (4)
 VF-Meeting Scenario
OptiX OSN Multiplexer group
Equipment
Member-2
Member-1 Member-4 Audio
Member-1 Member-3
mixing
Member-4
Member-1
Audio Member-3
Member-2
Member-2 Member-3 mixing
Member-4
Member-1
Audio Member-2
Member-3
mixing
Member-4
Member-1
Audio Member-2
Member-4
mixing
Member-3

PCM board Transmit audio signals


SDH line board Receive audio signals

64K SNCP (1)
 64K SNCP is a protection scheme that provides dual feed and selective receiving
and improves service reliability.
 The source port transmits the service on both the working channel and the protection channel. The sink port receives the service only from the working channel.

 When the working channel becomes unavailable, protection switching occurs at the sink port, which then receives the service from the protection channel.

 In the 64K SNCP scheme, the working source, protection source, and sink timeslots must be configured on the same DXM board, which sends the services to different boards through VC12 cross-connections based on 64K timeslots.

 The DXM board can provide 64K timeslot protection for services on FXSO12 and
AT8 boards.

64K SNCP (2)
 64K SNCP specifications
Item Specifications
Maximum Number of Protection Groups on an NE: OptiX OSN 550: 256; OptiX OSN 580: 1024
Service Level 64K
Revertive Mode Non-revertive
Switching Time ≤50ms
Switching Condition (Any of the Conditions Triggers Switching):
Trigger conditions of automatic switching:
 LFA/LMFA
 Channel associated signaling (CAS, F.E1 timeslot 16) failure indication switching: switching is triggered if the received CAS value is the same as the configured CAS value.
Trigger conditions of external switching:
 Trigger forced switching to the protection channel.
 Trigger forced switching to the working channel.

Quiz
1. Please describe the call process between Telephone A and Telephone B.

FXS-A FXS-B
FXO FXO

Telephone A PBX Telephone B

Contents
1. PCM Technology Introduction

2. PCM Principles and Application

3. PCM Boards

FXSO12 (1)
 Functions and features supported by an FXSO12 board
Function and Feature Description
Provides 12 FXS/FXO ports, which can be configured on the NMS.
Basic Functions
Supports the BROSCHT function.
Service Processing Transmits/Receives and processes 12-channel 64 kbit/s signals.
Overhead Processing Supports the setting and query of the V5 and J2 bytes.
Supports warm and cold resets. The warm reset does not affect services.
Maintenance Features Supports the query of the manufacturing information of the board.
Hot board swapping.

FXSO12 (2)
 Application 1: FXS and FXO ports are configured in pairs, providing the Z-
interface extension function.
NE1 NE2

64kbit/s Cross- SDH/ SDH/ Cross- 64kbit/s


FXSO12 Connect PDH PDH Connect FXSO12
64kbit/s Boards Boards Boards Boards 64kbit/s

PBX(Private Branch Exchange)

FXSO12 (3)
 Application 2: FXS and FXS ports are configured in pairs, providing the
hotline function.

NE1 NE2

64kbit/s Cross- SDH/ SDH/ Cross- 64kbit/s


FXSO12 Connect PDH PDH Connect FXSO12
64kbit/s Boards Boards Boards Boards 64kbit/s

FXSO12 (4)
 Parameters specified for the electrical interfaces of the board
Parameter FXS FXO
Type of Interface: RJ-11 (OSN1500/OSN3500); DB44 (OSN550/OSN580)
Channel Bandwidth (Hz) 300~3400
Port Impedance (Ω) Default value:600
-12.0 to 0.0, in steps of 0.5 -16.5 to 13.5, in steps of 0.5
TX Gain (dB)
Default value: -7.0 Default value: 0.0
-6.0 to 5.0, in steps of 0.5 -16.5 to 11.0, in steps of 0.5
RX Gain (dB)
Default value: 0.0 Default value: -3.5
Maximum Loop Impedance (Ω) 1800 -
Ring Flow AC Amplitude (Vrms) Support: 45/50/65 -
Specifications of the Interface Complies with ITU-T G.711, G.712, Q.552.

AT6 (1)
 Functions and features supported by an AT6 board

Function and Feature Description


Supports the E&M trunk functions.
Supports 2/4-wire audio signals.
Basic Functions
E&M signaling
N2AT6 supports the association of signaling and services.
Service Processing Transmits/Receives and processes six-channel 64 kbit/s signals.
Overhead Processing Supports the setting and query of the V5 and J2 bytes.
Supports warm and cold resets. The warm reset does not affect services.
Maintenance Features Supports the query of the manufacturing information of the board.
Hot board swapping.

AT6 (2)
 Application 1: PBXs are connected to OptiX OSN equipment through 2/4-
wire and E&M ports. Signaling is transmitted over E&M port and voice
services are transmitted over the 2/4-wire. OptiX OSN equipment is
equivalent to a signaling and voice relay.
NE1 NE2

64kbit/s Cross- SDH/ SDH/ Cross- 64kbit/s


AT6 Connect PDH PDH Connect AT6
64kbit/s Boards Boards Boards Boards 64kbit/s

PBX(Private Branch Exchange)

AT6 (3)
 Application 2: A single E&M port transmits only one channel of availability
signals, generally remotely transferring enabling/disabling signals to
achieve remote control.

NE1 NE2
Control
Controller
center

64kbit/s Cross- SDH/ SDH/ Cross- 64kbit/s


AT6 Connect PDH PDH Connect AT6
64kbit/s Boards Boards Boards Boards 64kbit/s

AT6 (4)
 Parameters specified for the electrical interfaces of the board - Anea 96

Parameter Value
Type of Interface Anea 96
Channel Bandwidth (Hz) 300~3400
Port Impedance (Ω) 600
2/4 wire
TX Gain (dB) -20 to 1.5, in steps of 0.5 Default value: -18.0
RX Gain (dB) -7.0 to 14.0, in steps of 0.5 Default value: -5.0
Supports Bell I to V and SSDC5 types. Each type supports
Type the configuration of Signaling unit and Trunk circuit
modes.
E&M
Voltage -48 V, -12 V
2E2M Support

DX1&DM12 (1)
 Functions and features supported by an N1DX1 board
 Works with the interface board to receive, process, and transmit 8 channels of
Nx64 kbit/s services (1≤N≤31) and 8 channels of framed E1 services.

 Works with the interface board to receive and process 8 channels of sub-rate
services. Synchronous/Asynchronous (2.4 kbit/s, 4.8 kbit/s, 9.6 kbit/s, 19.2 kbit/s
and 48 kbit/s) services can be encapsulated in X.50 D3 mode.

 Cross-connects 48 channels of E1 signals at the 64 kbit/s level on the system side.

DX1&DM12 (2)
 Functions and features supported by an N3DX1 board:
 Works with the interface board to receive, process, and transmit 8 channels of sub-rate services or 8 channels of N×64 kbit/s services (1 ≤ N ≤ 31), 8 channels of framed E1 services, and 8 channels of G.703 64 kbit/s codirectional services.
 Receives and processes 8 channels of PCM services through ports on its front panel, including the following:
 2 channels of RS-232 sub-rate services and 2 channels of RS-485 sub-rate services.
 2 channels of analog voice signals at the FXS/FXO ports.
 2 channels of 2/4-wire audio and E&M signals. The N3DX1 board supports setting the 2/4-wire audio mode either by DIP switches on the board or through the NMS, and supports Bell type V and SSDC5 E&M signaling.
DX1&DM12 (3)
[Figure: N3DX1 processing board port mapping. A DM12 interface board in the smaller slot provides the framed E1 and N×64 kbit/s or sub-rate external ports; a DM12 interface board in the bigger slot provides the E1/Co64K (G.703 64 kbit/s codirectional) and additional N×64 kbit/s or sub-rate ports. Front-panel ports: RS-485 → DDN25-26, RS-232 → DDN27-28, FXS/FXO (FXSO1/FXSO2) → DDN29-30, E&M and 2/4-wire (EM1/EM2) → DDN31-32. The internal 64K cross-connect maps the internal DDN channels to tributary units TU1 through TU63 (TU1-DDN65, TU2-DDN66, ..., TU63-DDN127).]
PFL1 (1)
 Functions and features supported by a PFL1 board:
 Basic Functions: Processes 8 x E1 optical signals.
 Service Processing: Receives/transmits and processes 8 x E1 optical signals. Supports transparent transmission in compliance with IEEE C37.94 to meet the transmission requirements of relay protection equipment in the electric power industry.
 Overhead Processing: Processes path overheads at the VC-12 level, such as J2 bytes.
 Maintenance Features: Supports inloops and outloops at optical ports; warm and cold resets (warm resets do not affect ongoing services); query of board manufacturer information; in-service FPGA loading; board software upgrades without affecting ongoing services; the PRBS function; and hot board swapping.
PFL1 (2)
 The PFL1 is a PDH processing board. It can be used on the OptiX OSN equipment series to receive/transmit and process 8 x E1 optical signals.
[Figure: Relay protection equipment connects to OptiX OSN equipment at each end over 2M optical signals in compliance with IEEE C37.94; the two OptiX OSN NEs carry the signals across the SDH network.]
PF4E8
 Functions and features supported by a PF4E8 board:
 Service Processing: Receives and processes 4 channels of 2M optical signals and 8 channels of E1/T1 electrical signals. Supports transparent transmission in compliance with IEEE C37.94 to meet the transmission requirements of relay protection equipment in the electric power industry.
 Overhead Processing: Processes path overheads (such as J2 and V5 bytes) at the VC-12 level.
 Maintenance Features: Supports inloops and outloops at optical and electrical ports; warm and cold resets (warm resets have no impact on services); board manufacturer information query; and the PRBS function.
DIO
 Functions and features supported by a DIO board:
 Service Processing: Supports 10 channels of input dry contact signals and 4 channels of output dry contact signals, or 4 channels of cabinet lighting output signals.
 Overhead Processing: Processes path overheads at the VC-12 level, such as J2 and V5 bytes.
 Alarms and Performance Events: Provides various alarms and performance events, facilitating equipment management and maintenance.
 Maintenance Features: Supports warm and cold resets (warm resets do not adversely affect services); querying board manufacturer information; and hot board swapping.
Quiz
1. Which of the following equipment does not support the PCM feature? ( )
 A. OSN 550
 B. OSN 2500
 C. OSN 3500
 D. OSN 7500
Conclusion
 PCM Technology Introduction
 PCM Principles and Application
 PCM Boards
Recommendation
 Huawei Learning Website
 http://support.huawei.com/learning/en/newindex.html
 Huawei Support Case Library
 http://support.huawei.com/enterprise/servicecenter?lang=en
Thank You!
www.huawei.com
 Replacing a board may interrupt services. Therefore, exercise caution when performing this operation. If you have any questions, contact Huawei.
 During board replacement, the protection switching time meets the requirements of the telecom industry.
 The general principles for preventing static electricity are as follows:
 Correctly ground the equipment according to its grounding requirements.
 Wear an ESD wrist strap during the operation.
 Before touching devices, boards, or IC chips, wear an ESD wrist strap and insert its other end into the ESD jack on the subrack.
 If no ESD wrist strap is available, wear ESD gloves.
 Removing a board that carries services may interrupt services. You are advised to hot-swap boards during off-peak hours.
 Note:
 Do not exert excessive force when inserting a board. Otherwise, the pins on the backplane may be bent.
 Insert the board along the anti-misinsertion guide slot of its board slot to prevent components on the boards from touching each other and causing a short circuit.
 When holding a board, do not touch the electrical components, cable connectors, or cable troughs on the board.
 After a board is inserted into the backplane, it takes several minutes for the board to start.
 If a board is connected to cables or fibers on its front panel, remove them first, then remove and insert the board.
 Here, we use the N4GSCC board of the OptiX OSN 3500 as an example to describe the mapping between the input voltage of the equipment and the setting of the power jumper on the GSCC board. The jumper settings differ slightly between MSTP equipment types and control boards, so set the jumper cap according to the type of the on-site equipment and the board to be configured. Set the power input jumper cap on the spare board to the same position as on the board being replaced.
 Note: Ensure that the software versions of the active and standby SCC boards are the same. Otherwise, data cannot be synchronized between them, and the NE reports the MSSW_DIFFERENT alarm.
 Operation Procedure
 In the workbench view, double-click the Main Topology icon to enter the main topology view.
 In the Main Topology, right-click the NE whose current alarms need to be queried and choose Browse Current Alarms from the shortcut menu.
 Step 3: Upload the configuration data of the GSCC board.
 Choose Configuration > NE Configuration Data Management from the main menu.
 In the left pane, select the NE whose board needs to be replaced and click . The NE is displayed in the Configuration Data Management List.
 Right-click the NE in the Configuration Data Management List and choose Upload from the shortcut menu.
 A progress bar indicates the upload progress. Close the Operation Result dialog box after the upload is complete.
 Step 4: Check data consistency between the U2000 and the NEs. If the verification is successful, perform the subsequent operations.
 Choose Configuration > NE Configuration Data Management from the main menu.
 Select an NE from the Object Tree on the left and click .
 Click Check Consistency.
 In the displayed Confirm dialog box, click OK. A progress bar indicates the operation progress.
 In the Operation Result dialog box, click Close.
 When the data on the U2000 is inconsistent with that on the NE, upload the NE data so that the data on the U2000 is consistent with that on the NE.
 Perform an active/standby switchover on the faulty board.
 Right-click the NE icon and choose NE Explorer from the shortcut menu.
 Choose Configuration > Board 1+1 Protection from the Function Tree.
 In the 1+1 Protection Relationship List, if the values of Current Board and Working Board are the same, no active/standby switchover has occurred. If the values are different, a switchover has occurred. After confirming that the faulty board is the protection board, proceed with the following steps.
 After the new control board starts working, the system generally synchronizes data automatically, including NE IDs and IP addresses. In addition, backing up data in batches takes a long time, so pay special attention to this.
 If only one control board is configured and that board is faulty, the U2000 cannot log in to the NE.
 After the control board is replaced, you can download data from the U2000 to restore the configuration.
 Before downloading data, ensure that the IP address and NE ID of the control board are correctly set so that the U2000 can log in to the NE.
 Step 10: Download the original configuration data.
 Choose Configuration > NE Configuration Data Management from the main menu.
 In the left pane, select the NE whose board needs to be replaced and click the button. The NE is displayed in the Configuration Data Management List.
 Right-click the NE in the Configuration Data Management List and choose Download from the shortcut menu.
 A progress bar indicates the download progress. Close the Operation Result dialog box after the download is complete.
 Step 11: Configure the NE user.
 After logging in to the NE as user "root", manually add the users that existed on the original control board.
 When the cross-connect and timing board is faulty, replace it as soon as possible to restore services.
 When the board works in 1+1 hot backup mode, replacing the cross-connect and timing board does not affect services, except that packet services are interrupted.
 If a single cross-connect board is configured and that board is faulty, services may be interrupted. In this case, you can directly replace the cross-connect board; ensure that the software versions of the old and new cross-connect boards are the same.
 Impact on the System
 If the board is not configured with protection, replacing it interrupts the services carried on the board.
 If the board is configured with network protection and switching works normally, replacing the board does not affect services.
 PDH boards are classified into interface boards and processing boards. The impact on services when each type is replaced is as follows:
 If an interface board is replaced, the services on that board are interrupted.
 If the processing board to be replaced is configured with TPS protection, services are protected; otherwise, services are interrupted.
 From V200R013C00 of the MSTP equipment onwards, the equipment supports PCM boards. When a PCM board is faulty, replace it to restore normal service operation.
 Removing a board that carries services interrupts those services. Therefore, you are advised to replace PCM boards during off-peak hours.
 As MSTP products enter the mature stage, some early boards (old boards for short) go out of production because key components are discontinued or for marketing reasons. However, these old boards remain in service on live networks, so spare parts must still be provided to customers. Newly developed MSTP boards must therefore work with NEs running old software versions, serving as spare parts for the old boards, and must also work with NEs running new versions to provide new functions. The board version replacement function solves the compatibility problem between board versions and NE software versions, enables smooth replacement, expansion, and maintenance across old and new boards, and reduces O&M costs.
 A multi-ID board is a new board that has multiple board IDs. When it replaces an old board, it works normally using the old board's ID. When it is not replacing an old board, it can also work properly using its new board ID.
 Board Replacement: In the case of a board fault, or a replacement required for other reasons, replace the old-ID board with a new-ID board.
 Board Capacity Expansion: Add new boards in idle slots for capacity expansion. In this case, no corresponding logical board is configured on the NE on the U2000; first add a logical board supported by the NE software, then insert the new board.
 Board ID Modification: A multi-ID board can work normally with either the old or the new board ID. If the ID of a running multi-ID board does not match the NE software because of the configuration, change the board ID: delete the original logical board and add a logical board supported by the NE software.
 For some data boards, such as the N4EFS0 and N2EFS4, the board ID automatically matches the NE software, and the boards work normally with the newly configured logical boards.
 For other boards, perform a cold reset manually. After some time, the boards work normally with the newly configured logical boards.
 The cross-connect capacity of the MSTP system is determined by the cross-connect board, so the capacity is upgraded by replacing this board.
 The following uses the upgrade from GXCSA to UXCSB as an example to describe how to upgrade the capacity of the cross-connect board.
 Replace the standby cross-connect board:
 On the U2000, double-click the icon of the NE to be upgraded, delete the logical board of the standby cross-connect board SSN1GXCSA, and add it as SSN1UXCSB.
 Remove the standby cross-connect board SSN1GXCSA from the equipment and insert the prepared SSN1UXCSB.
 After the standby board SSN1UXCSB goes online, the SRV indicator on the board turns on. Wait about 15 minutes, then check the NE alarms and ensure that the HSC_UNIVAIL alarm is cleared.
 Switching between the active and standby cross-connect boards: Right-click the cross-connect slot and choose Configure Active/Standby Switchover from the shortcut menu. The 1+1 protection group list is displayed; the first protection group is that of the cross-connect boards. Right-click the first protection group and choose Perform the active/standby switchover to switch between the active and standby cross-connect boards.
 Consistency check
 After the standby cross-connect board starts up, query the physical board. If the active and standby cross-connect boards are normal, wait for 5 minutes while the U2000 verifies consistency. If the verification succeeds, the upgrade is successful. After the upgrade is complete, check the version number of the cross-connect board and the alarms on the NE.
 Consistency check: Choose Configuration > Configuration Data Management from the main menu. In the left pane, select the NE to be verified. In the Configuration Data Management List window, click and choose the NE to perform Consistency Check.
 Note:
 The SCC board is directly replaced by the GSCC board. Therefore, do not perform any upgrade on the SCC board, in case a rollback is needed.
 An SCC and a GSCC of the same software version can synchronize data on the same NE; however, after a period of time the standby SCC will be reset. Therefore, do not place a GSCC and an SCC of the same software version on the same NE at the same time. If an SCC and a GSCC of different software versions are placed on the same NE, data cannot be synchronized because the software versions differ.
 This method applies only to the scenario where the ID, IP address, and software version of the control board are the same before and after the replacement, and no data is present on the replacement GSCC.
 At this moment, the NE is temporarily unreachable. After the U2000 can log in to the NE again, delete the active SCC board on the NE and add the GSCC logical board in the corresponding slot. The GSCC logical board is added in the physical slot of the first GSCC board. After the GSCC board is added, it should be displayed as online on the U2000.
 Deliver the data from the NMS to the NE as follows:
 Before the download, check whether a shared ring MSP exists (one optical port configured with two ring MSP groups). If yes, delete one protection group from the shared ring MSP before downloading, and note down the attributes of the related NEs, boards, and protection groups before the deletion.
 After the download succeeds, run the command :cfg-set-multimsp:bid,pid,2 on the Navigator to enable the multiplex section sharing function on the optical port. Then use the NMS to reconfigure the ring MSP group that was deleted earlier.
 Precautions: Smooth upgrade is not supported. This operation will interrupt
Ethernet services.
 Answer: ABCD
 Choose Service > SDH Trail > Search for SDH Trail from the main menu of the U2000.
 In the Search Policy area, select Discrete cross-connections that have not formed trails and click Next. The U2000 starts searching for trails and deletes any trail that is not set with the management flag at the network layer. You can then view all discrete services on the network.
 Note: If a service is found as a discrete service, it does not mean that the service cannot run normally on the NE side.
 Trail configuration is affected.
 During end-to-end service (trail) configuration on the U2000, the U2000 treats the timeslots occupied by discrete services as idle when selecting the source and sink of a trail, calculating routes, and determining timeslots. The source and sink timeslots occupied by discrete services may therefore be used again, causing creation of the entire trail to fail; the U2000 then displays a message indicating that service timeslots conflict.
 The queried trail resources are incomplete.
 When you query the trails related to a tributary board, if not all trails were searched out beforehand, the query result may be incomplete. Trail resources cannot be queried correctly through the trail management function.
 The timeslot allocation diagram is incorrect.
 The timeslot allocation diagram report of the U2000 is based on trails; timeslots are displayed only when the corresponding node of the protection subnet has trails. Timeslots occupied by discrete services cannot be displayed in the diagram.
 The resource statistics report is inaccurate. Circuit data is incomplete and cannot be used as a data source for other systems (such as the NMS, resource system, performance analysis system, and network evaluation system), affecting their effective use.
 Unidirectional trail: consists of a working path and a protection path. The protection path is determined by the structure of the self-healing network.
 Bidirectional trail: if the source and sink of one trail are the sink and source of another trail at the same level, the two unidirectional trails can be regarded as one bidirectional trail.
 Answer to question 1:
 The trail levels on the U2000 include VC4-xC (x = 4, 8, 16, and 64), VC4 server trail, VC4, VC3, and VC12.
 The server trail exists on the U2000 side, while the client trail exists on the NE side.
 Answer to question 2:
 Discrete services are classified into the following types: VC4_64C, VC4_16C, VC4_8C, VC4_4C, VC4, VC3, and VC12.
 After a trail search, a cross-connection is either a discrete service or part of a trail formed with other cross-connections; it cannot be both.
 Prerequisites for Trail Search
 You must be an NM user with "maintainer group" authority or higher.
 The U2000 license supports the end-to-end SDH management function.
 NE data has been uploaded to the U2000.
 The fiber connections between NEs are correctly set up.
 The services on a per-NE basis must be correctly configured.
 The cross-connection configuration on the NE may be incomplete or incorrect in the following cases:
 The service timeslots of the protection route are not configured. This occurs in SNCP protection subnets.
 The service timeslots of the protection route are incorrectly configured.
 The SNCP service is incorrectly configured, so the protection route is incomplete.
 The service configuration is incomplete. Check whether incomplete services are configured on a per-NE basis (for example, no pass-through service is configured at a pass-through site). Complete the service configuration and search for trails again.
 If the source or sink port of a service is outside the management scope of the NMS, or neither port is managed by the NMS, the service is called an outgoing service.
 If you are unsure about the analysis result, deactivate the service and wait for one minute. Then check whether new alarms such as PS and TU-AIS are generated. If not, delete the service. If yes, the analysis result is incorrect; reactivate the service immediately.
 Handling Method
 S1: On NE2, create an outgoing optical port for the outgoing service, then search for trails. The cross-connections between NE1 and NE2 can then be found as the path between NE1 and NE2.
 S2: Create T1 as a preconfigured NE or a platform 5.0 virtual NE, create a fiber between T1 and NE2, and create a protection subnet consisting of T1 and NE2. On T1, configure a single-station service corresponding to NE2. Search for trails; the trail between NE2 and T1 can then be found.
 S3: Create T2 and T3 as preconfigured NEs or platform 5.0 virtual NEs, create fibers between T2, T3, and NE4, and create a protection subnet consisting of T2, NE4, and T3. Configure single-station services on T2 and T3. Search for trails; the trails between T2 and T3 can then be found.
 Check whether the data on the U2000 is consistent with that on the NE side. If not, upload the configuration data to the NMS.
 Make sure that the fiber connections created on the U2000 are consistent with the actual fiber connections of the equipment.
 Search for protection subnets and ensure that all of them have been found, and that the outgoing optical ports and virtual NEs have been created (if necessary). If problems are found in the search result, analyze and handle them first.
 Search for trails on the NMS.
 Back up the NMS data: back up the MO data, back up the database, and export the script file.
 Tips for clearing discrete services: handle the discrete services of the add/drop tributaries and the outgoing discrete services first, and then handle the pass-through discrete services.
 Answer:
 Cause analysis: The unidirectional SNCP ring is tangent to the bidirectional MSP ring. In the trail resource report, there is an E1 bidirectional trail between NE1 and NE4. If the trail cannot be found, check the service configuration integrity of each node; NE3 should have two dual-fed services and one selectively received service.
 After clearing discrete services, guide users to configure services using the trail function.
 To avoid creating a large number of discrete services, read the U2000 online help or operation guide before configuring services.
 When you discover discrete services, handle them as soon as possible so that subsequent service configuration is not affected.
 The causes of discrete services are as follows:
 The data on the U2000 is inconsistent with that on the NE, or the fiber connections on the U2000 are inconsistent with the actual fiber connections of the equipment.
 The configuration data does not meet requirements, so the U2000 cannot form trails from it.
 The service configuration is incomplete and a trail cannot be formed.
 Outgoing services are not configured with outgoing optical ports, virtual NEs, or preconfigured NEs.
 NMS search restrictions.
 Services that are not in use (or services reserved by the customer that carry no circuits) and junk services.
 The fields in an Ethernet frame are described as follows:
 Preamble: 7 bytes; the bit pattern of each byte is 10101010, used for timing synchronization between the transmit end and the receive end.
 SFD: Start Frame Delimiter; the bit pattern is 10101011, which informs the receive end that the next byte is the beginning of the frame.
 DMAC: destination MAC address, 6 bytes.
 SMAC: source MAC address, 6 bytes.
 Length/Type: 2 bytes; the meaning depends on the value.
 If Length/Type > 1500, it indicates the type of the data frame, that is, the upper-layer protocol (for example, 0x0800 indicates that the Layer 3 payload is an IP packet).
 If Length/Type <= 1500, it indicates the length of the Data field (including padding) in the frame.
 Data: 46 to 1500 bytes. If the Data field is shorter than 46 bytes, padding bytes must be added so that the entire frame is at least 64 bytes.
 FCS: frame check sequence, 4 bytes.
 The frame length therefore ranges from 64 to 1518 bytes.
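These framing rules can be sketched in code. The following Python snippet is an illustrative sketch (not taken from the training material; the MAC addresses and payload are made up): it builds the DMAC/SMAC/Type/Data portion of a frame and applies the 46-byte minimum-payload padding rule described above.

```python
import struct

MIN_FRAME = 64     # minimum legal frame length: header + data + FCS
HEADER_LEN = 14    # DMAC (6) + SMAC (6) + Length/Type (2)
FCS_LEN = 4        # frame check sequence, appended by the MAC layer

def build_frame(dmac: bytes, smac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build DMAC|SMAC|Type|Data, padding Data to at least 46 bytes."""
    min_payload = MIN_FRAME - HEADER_LEN - FCS_LEN  # 46 bytes
    if len(payload) < min_payload:
        payload += b"\x00" * (min_payload - len(payload))
    return dmac + smac + struct.pack("!H", ethertype) + payload

# A 2-byte payload is padded so the frame (plus FCS) reaches 64 bytes.
frame = build_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", 0x0800, b"hi")
print(len(frame) + FCS_LEN)  # 64
```

Note that the FCS itself is normally computed and appended by the MAC hardware, so it is only accounted for, not generated, here.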
 Because of the CSMA/CD algorithm, a standard Ethernet frame must be at least 64 bytes; this limit is determined by the maximum transmission distance and the collision detection mechanism.
 The minimum frame length avoids the following situation: a station finishes sending the last bit of a frame before the first bit has reached the remote station. The remote station, seeing an idle line, starts sending, and a collision occurs.
 The upper-layer protocol must ensure that the Data field of an Ethernet frame is at least 46 bytes; if it is shorter, the upper layer must add padding bits to bring it to 46 bytes. A 46-byte Data field, a 14-byte Ethernet frame header, and a 4-byte check code form the 64-byte minimum Ethernet frame.
 The maximum length of the Data field in an Ethernet frame is 1500 bytes.
 Network Device Performance Verification Test
 The results show the forwarding capability, data buffering capability, and burst-data handling capability of a single device. The test indicators include throughput, latency, frame loss rate, and back-to-back frames.
 Network Performance Test
 The results show the bandwidth, forwarding capability, and end-to-end service latency of the entire network. The test indicators generally include throughput and latency.
 Customized Test
 Based on actual service requirements and distribution, testers may customize methods that combine the first two; this is mainly used for large-scale network tests.
 When testing a single device, there are two physical links between the device under test (DUT) and the data analyzer (tester). Depending on the actual service requirements, the connected ports can be optical or electrical, and the port rate is selected accordingly. The test uses two ports on the data analyzer and two ports on the DUT.
 After the physical connections are complete, the data analyzer sends data streams to the DUT through one port. After passing through the DUT, the streams return to the other port of the analyzer. From the transmitted and received streams, the analyzer calculates the throughput, forwarding latency, frame loss rate, and back-to-back figure of the DUT.
 During the test, if one port only transmits data and the other only receives it, the result is the unidirectional performance of the DUT.
 If both analyzer ports transmit and receive data streams at the same time, the result is the bidirectional performance of the DUT.
 Finally, the measured performance is compared with the data provided by the vendor to verify and confirm the performance of the DUT.
 In the network performance test, there are two physical links between the system under test (SUT) and the data analyzer (tester). Depending on the actual service requirements, the ports can be optical or electrical, and the port rate is selected accordingly. The test uses two ports on the data analyzer and two ports on the SUT.
 You can select two ports at the same site, or one port at each of two sites. (Because only one data analyzer is used, when the distance between sites exceeds the maximum reach of the signals sent by the analyzer, ports at different SUT sites cannot be selected as access ports.) The test data streams sent from the data analyzer must traverse the entire system, or at least the paths of the actual data flows. Note that the two physical links between the SUT and the data analyzer must not pass through other DCN networks; otherwise, accurate test results cannot be obtained.
 After the physical connections are complete, the data analyzer sends data streams to the SUT through one port. The streams are forwarded by the SUT and returned to the other analyzer port. From the transmitted and received streams, the analyzer calculates the system latency and QoS of the SUT.
 During the test, if one port only transmits data and the other only receives it, the result is the unidirectional performance of the SUT.
 If both analyzer ports transmit and receive data streams at the same time, the result is the bidirectional performance of the SUT.
 Finally, the measured performance is compared with the data provided by the vendor to verify and confirm the performance of the SUT.
 Device tests and network tests are the more common tests. A customized test is
more specific: it tests particular services and links in the tested system. Two or
more Data Analyzers can be used; these testers communicate with each other over
the DCN and are synchronized through GPS satellites. In this way, the system
latency, throughput, and packet loss rate can be tested.

 As shown in the slide, the tested system forwards the test data flow as a specific
service (such as a private line service or a VIP customer service). The two Data
Analyzers are located at different places and synchronized through GPS: one
sends data streams, and the other receives the data streams forwarded by the
tested system. From the test results, the latency, throughput, and packet loss
rate of the service can be obtained.

 Note:

 The Data Analyzers can communicate with each other over the DCN.
However, the physical links between the Data Analyzers and the tested
system must not contain the DCN or other data communication equipment;
otherwise, an accurate test result cannot be obtained, because the test scope
would then cover the tested system, the DCN, and the other data
communication devices.
 The test of the MSTP Ethernet feature mainly focuses on the Throughput and
Latency.
 Because the standard Ethernet frame length (Ethernet II) is not fixed, the
frame length is specified before the throughput is tested (the typical
Ethernet frame lengths are 64, 128, 256, 1024, 1280, and 1518 bytes). A
throughput test must be performed for each frame length, and each test
should last at least 60 seconds.

 When Y=Xmax, the frame rate actually transmitted by the Data Analyzer is the
maximum frame rate supported by the device.

 If the number of frames transmitted and the number of frames received by
the Data Analyzer are the same, the throughput of the device at this frame
length equals the maximum frame rate. That is, the maximum frame
forwarding rate without packet loss equals the maximum frame rate
supported by the device (the normal case).

 If the number of frames received by the Data Analyzer is less than the
number it sent, frame loss occurs on the device. For MSTP, the number of
virtual channels bound on the SDH side may be insufficient to carry the
services accessed by the port, that is, the bandwidth is insufficient. For
example: the physical FE port on an Ethernet board accesses traffic at line
rate, that is, 100 Mbit/s, but only 10 VC-12-level virtual channels are bound
on the SDH side. The Ethernet board therefore cannot carry all the data
accessed from the external port, and frame loss occurs.

 Maximum rate of the media: the maximum rate supported by the device
when the IFG is 12 bytes (the minimum inter-frame gap), in bit/s; that is, the
line rate. For example, the line rate of an FE interface is 100 Mbit/s. The
minimum inter-frame gap is the limiting case: it indicates that the device is
fully loaded and the interval between frames has reached the minimum, in
which case the line rate can be reached.

 Maximum frame rate: the frame rate supported by the physical port of the
device at line rate for a given frame length, in frames/s.

 Based on the line rate, frame length, and minimum inter-frame gap (IFG),
you can calculate the maximum line-rate frame rate in the case of a
specific frame length.
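The relationship above can be sketched in Python; the helper name is illustrative, and the 20-byte per-frame overhead (7-byte preamble + 1-byte SFD + 12-byte IFG) follows the figures used throughout this section:

```python
def max_frame_rate(line_rate_bps: float, frame_len_bytes: int,
                   preamble: int = 7, sfd: int = 1, ifg: int = 12) -> float:
    """Maximum line-rate frame rate (frames/s) for a given frame length.

    On the wire each frame occupies the frame itself plus the preamble,
    the start-of-frame delimiter, and the minimum inter-frame gap.
    """
    bits_on_wire = (frame_len_bytes + preamble + sfd + ifg) * 8
    return line_rate_bps / bits_on_wire

# FE (100 Mbit/s), 1518-byte frames: 100e6 / ((1518 + 20) * 8) ~ 8127.44 frames/s
rate_1518 = max_frame_rate(100e6, 1518)
# FE, 64-byte frames: 100e6 / ((64 + 20) * 8) ~ 148809.52 frames/s
rate_64 = max_frame_rate(100e6, 64)
```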
 The throughput test process is as follows:

 Configure a single data flow from one source port to one destination port on the tester.

 Set the initial load to the maximum rate (maximum frame rate) used by the
tested system interface.

 Packets with a specified length are sent from all source ports to the
destination port. After transmission completes, count the number of packets
sent and received on all data streams.

 If packet loss occurs, reduce the load and repeat the test.

 Use the dichotomy (binary search) to increase or decrease the load in
subsequent repeated tests until the load difference between success and
failure is less than the resolution of the test. The result is the zero-loss
throughput.
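The binary-search procedure above can be sketched as follows; `send_trial` is a hypothetical callback standing in for one Data Analyzer trial run at a given frame rate:

```python
def zero_loss_throughput(send_trial, max_rate: float, resolution: float = 0.001):
    """Binary-search the highest offered load with zero frame loss.

    send_trial(rate) drives the tester at the given frame rate and returns
    True if no frames were lost.  Returns the zero-loss throughput as a
    fraction of max_rate (the maximum line-rate frame rate).
    """
    lo, hi = 0.0, 1.0              # fractions known to pass / fail
    if send_trial(max_rate):
        return 1.0                 # line rate passes: throughput is 100 %
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_trial(mid * max_rate):
            lo = mid               # no loss: raise the lower bound
        else:
            hi = mid               # frames lost: lower the upper bound
    return lo

# Simulated device that drops frames above 50.8 % of the 64-byte line rate:
dut = lambda rate: rate <= 0.508 * 148810
print(round(zero_loss_throughput(dut, 148810) * 100, 1))  # ~ 50.8
```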

 The throughput is expressed as a percentage. For example, a throughput of 90%
means that the maximum frame forwarding rate without packet loss is 90% of
the maximum frame rate supported by the device. Sometimes the throughput is
expressed in packets/s or frames/s.
 The IFG, preamble, and SFD are removed by the Ethernet board. If a tag (4-byte
VLAN label) is added by the board, the frame structure is as follows:
<DA><SA><VLAN><TYPE><Data+PAD><FCS>. In this case, the frame length is
L+4.

 Note that the VLAN tag of the 4 Bytes is added to the Ethernet standard frame by
the Ethernet board. For the pure transparent transmission board such as the EFT,
the VLAN tag does not need to be added. The signals with tags can be directly
accessed.

 If MPLS or QinQ is not enabled, the GFP header can be directly added to the SDH,
as shown in the following figure.
[Figure: GFP encapsulation of an Ethernet MAC frame. The Ethernet frame (7-byte Preamble, 1-byte Start of Frame Delimiter, DA, SA, Length/Type, MAC client data, Pad, 4-byte FCS) is mapped into a GFP frame consisting of a core header (2-byte PLI, 2-byte cHEC), a payload header (2-byte Type, 2-byte tHEC, 0–60-byte GFP extension header), and the GFP payload; the preamble and SFD are not carried.]

 In this case, the frame structure is <GFP Core Header><GFP Payload Header>
<DA><SA><VLAN><TYPE><Data+PAD><FCS><GFP FCS>, and the frame length
is: L+4 (VLAN)+4 (GFP Core Header)+4 (GFP Payload Header)+4 (GFP FCS)
 Example:

 The structure of C-12 is 9 rows and 4 columns, with the first and last bytes
unused, giving 9×4−2 = 34 payload bytes per frame.

 Capacity of the C12 container = (9×4-2)×8×8000=2176kbit/s
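The capacity arithmetic can be checked with a one-line helper (the function name is illustrative); each container byte contributes 8 bits per frame at 8000 frames/s:

```python
def container_capacity_bps(byte_count: int) -> int:
    """Container capacity in bit/s: payload bytes x 8 bits x 8000 frames/s."""
    return byte_count * 8 * 8000

c12 = container_capacity_bps(9 * 4 - 2)   # 34 bytes  -> 2 176 000 bit/s
c3 = container_capacity_bps(9 * 84)       # 756 bytes -> 48 384 000 bit/s
```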


 In most cases, the throughput is tested with the Data Analyzer; the theoretical
value is only for reference. The theoretical throughput formula is used in the
scenario where the required throughput is 100%, the container bandwidth and
the frame length are known, and the number of virtual channels to be bound is
to be calculated. For example:

 The line rate of the port on the EFS4 board is 100Mbit/s. When the
transmit/receive frame length of the EPL service is 1518 Bytes, if the
throughput reaches 100%, what is the number of VC-12 virtual channels to
be bound?

 Maximum line-rate frame rate N = 100 Mbit/s / ((1518+8+12)×8)
= 8127.44. Rounded up: 8128 frames/s.

 The actual frame length of the board is 1518+4+12=1534 Bytes.

 The C12 container bandwidth is 2176kbps.

 Number of VC-12 virtual channels = (board frame length ×8×
maximum frame rate) / container bandwidth = (1534×8×8128)
/2176000 = 45.8. Rounded up: 46.

 It can be seen that 46 VC-12 virtual paths are required for transparent
transmission of 100% throughput.
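The worked example above can be reproduced in Python (rounding up at each step, as the text does):

```python
import math

# EFS4 FE port (100 Mbit/s), EPL service, 1518-byte frames, 100 % throughput.
line_rate = 100e6
frame_len = 1518

# Maximum line-rate frame rate (preamble 7 + SFD 1 + IFG 12 = 20 bytes overhead).
n_max = math.ceil(line_rate / ((frame_len + 20) * 8))     # 8128 frames/s

# Frame length on the SDH side: + 4-byte VLAN tag + 12 bytes of GFP overhead.
board_len = frame_len + 4 + 12                            # 1534 bytes

c12_bw = 2_176_000                                        # VC-12 container, bit/s
vc12_count = math.ceil(board_len * 8 * n_max / c12_bw)    # 46 channels
```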
 If the formula on the previous page is used, the calculated throughput for the
following case is 50.8%: an EFS4 board bound with one VC-3, carrying an EPL
service with a frame length of 64 bytes (port rate 100 Mbit/s). This is very close
to the actual test value. If the difference is too large, check whether there are
problems in the test system, for example: bit errors on the SDH side, incorrect
Data Analyzer settings, or a faulty Ethernet board. The theoretical throughput
is calculated as follows:

 C3 container bandwidth =9×84×8×8000=48384Kbps.

 Maximum line-rate frame rate N = 100 Mbit/s / ((64+8+12)×8) = 148809.52.
Rounded up: 148810 frames/s.

 EFS4 Frame length L=64+4 (VLAN tag) +12 (GFP header) =80 Bytes.

 Final throughput = (48384 kbit/s / (148810 frames/s × 80 bytes × 8 bits)) × 100%
= 50.8%.
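Likewise, the 50.8% figure can be reproduced:

```python
# EFS4 bound to one VC-3, 64-byte EPL frames, 100 Mbit/s port.
c3_bw = 9 * 84 * 8 * 8000                    # 48 384 000 bit/s
n_max = round(100e6 / ((64 + 8 + 12) * 8))   # 148 810 frames/s at line rate
board_len = 64 + 4 + 12                      # 80 bytes on the SDH side
throughput = c3_bw / (n_max * board_len * 8) * 100
print(f"{throughput:.1f} %")                 # 50.8 %
```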

 All the test results in this document, including the SmartApplication results, were
obtained using the SmartBits 600.
 Throughput and latency are two important Ethernet performance indicators.
Throughput reflects the forwarding capability of the system in terms of the
number of forwarded frames, while latency reflects the forwarding speed of the
system. Therefore, even a throughput of 100% does not by itself prove good
device performance.

 During the latency test, first perform a throughput test to ensure that the
transmit rate of the Data Analyzer is less than or equal to the throughput. This
ensures that no frame loss occurs during transmission. It is recommended that
the test rate be equal to the throughput; in this way, the worst-case latency can
be measured.

 If the latency is too long, the service quality of the Ethernet is reduced and the
quality of the upper layer services is degraded. For example: The voice quality of
the VoIP service decreases, and the Internet access speed decreases.

 The latency can also be used to measure the time when an Ethernet frame passes
through a network to determine whether the network supports services.
 For the latency of storage and forwarding device:

 Start time: Last bit of the frame enters the device.

 End time: First bit of the frame exits the device.

 Latency= End time- Start time.

 Indicates the performance (packet forwarding speed) of a network device.
The result is generally not compared with that of a bit forwarding device.
 For the latency of the bit forwarding device:

 Start time: First bit of the frame enters the device.

 End time: First bit of the frame exits the device.

 Latency= End time -Start time.

 Indicates the performance (packet forwarding speed) of a network device.
The result is generally not compared with that of a store-and-forward device.
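The two definitions above differ only in the start time. A small sketch; the FE timestamps and the 20 µs hold time are invented for illustration:

```python
def latency(first_bit_in, last_bit_in, first_bit_out, store_and_forward=True):
    """Latency under the two device models described above (times in seconds).

    Store-and-forward: last bit in  -> first bit out.
    Bit (cut-through):  first bit in -> first bit out.
    """
    start = last_bit_in if store_and_forward else first_bit_in
    return first_bit_out - start

# Hypothetical timestamps for one 64-byte frame on a 100 Mbit/s port:
t_in_first = 0.0
t_in_last = 64 * 8 / 100e6        # last bit arrives after 5.12 us serialization
t_out_first = t_in_last + 20e-6   # assume the device holds the frame 20 us
sf_latency = latency(t_in_first, t_in_last, t_out_first)         # 20 us
ct_latency = latency(t_in_first, t_in_last, t_out_first, False)  # 25.12 us
```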
 In the above table, CT is the test value in Cut-Through mode, and S&F is the test
value in Store-and-Forward mode.

 Generally, latency test is used to test the latency of the network.

 During the latency test, the frame transmission rate should be smaller than the
throughput.

 The working mode of the Data Analyzer can be configured. Huawei OptiX MSTP
devices are all store-and-forward devices, so only the S&F latency test results
need attention.
 The frame loss rate test measures the response of a device or network under
overload. In real-time applications, heavy frame loss quickly degrades service
quality and network efficiency.

 In the formula, Xmax is the maximum line-rate frame rate of the device interface,
and Y is the actual frame rate.
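The slide's formula itself is not reproduced in these notes; assuming the usual definition based on sent and received frame counts (as in RFC 2544-style benchmarking), a sketch is:

```python
def frame_loss_rate(sent: int, received: int) -> float:
    """Frame loss rate in percent: (sent - received) x 100 / sent."""
    return (sent - received) * 100.0 / sent

# e.g. 148 810 frames offered in one second, 75 595 forwarded -> ~49.2 % loss
loss = frame_loss_rate(148810, 75595)
```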
 Back-to-Back can be used to evaluate the cache capability of a device.

 First, send a burst of fixed-length data frames at the minimum packet
interval (line-rate frame rate) to the tested device.

 If any frame is lost, reduce the burst size of the data frame (the burst size
refers to the number of Ethernet frames) and repeat the test.

 Finally, the maximum burst size (quantity) that can be processed by the
tested device is obtained.

 The Data Analyzer always transmits data frames in the minimum packet
interval with the fixed length. Therefore, the Data Analyzer controls the burst
size of the data frame by prolonging or shortening the sending time of the
data frame.

 The Back-to-Back test is valid only when the throughput is lower than the line
rate of the interface. When the throughput equals the line rate, all data frames
can pass through the tested device and the burst length is effectively unlimited;
as a result, no packet loss can be observed and the buffering capability cannot
be revealed. Therefore, when performing the Back-to-Back test, the throughput
should be below the line rate for the result to be accurate.
 Can this table be used to calculate the maximum duration of no frame loss for
each typical packet length in the case of the EFS4 board in this configuration?

 Taking the 64-byte frame length as an example, calculate the maximum
duration without packet loss.

 The Data Analyzer controls the burst size by prolonging or shortening the
frame sending time. The table in the slide shows that the device can receive
4333 Ethernet frames of 64-byte length without packet loss. At line rate, the
interface receives 148810 frames per second (the EFS4 interface rate is
100 Mbit/s), so 1/148810 s is the time required to receive one frame, and the
time required to receive 4333 frames is 4333×(1/148810) seconds.
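The arithmetic above gives roughly 29 ms for the 64-byte case:

```python
burst_frames = 4333          # max zero-loss burst, taken from the slide's table
frames_per_second = 148810   # 64-byte frames at FE line rate
burst_seconds = burst_frames / frames_per_second
print(f"{burst_seconds * 1000:.1f} ms")   # 29.1 ms
```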
 Answer: According to the formula N = Rate/((L+7+1+12)×8), the maximum frame
rate is 84460 frames/s. After QinQ is enabled, the frame length becomes L+4 (C-
VLAN)+4 (S-VLAN)+12 (GFP-F encapsulation bytes), so the actual frame length of
the signal on the board is 148 bytes.
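The stated results (84460 frames/s and 148 bytes) imply a 100 Mbit/s port and 128-byte test frames; both values are inferred here, not given explicitly. Under those assumptions the numbers check out:

```python
import math

# N = Rate / ((L + 7 + 1 + 12) * 8), rounded up, with Rate = 100 Mbit/s, L = 128.
n_max = math.ceil(100e6 / ((128 + 7 + 1 + 12) * 8))   # 84 460 frames/s

# With QinQ enabled: + 4-byte C-VLAN + 4-byte S-VLAN + 12 GFP-F bytes.
board_len = 128 + 4 + 4 + 12                          # 148 bytes
```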
 Test Preparations

 Select ports with different functions on the Data Analyzer according to the
test requirements, for example: FE electrical port, FE optical port, GE optical
port, and GE electrical port.

 The Ethernet services of MSTP equipment are ultimately carried by the SDH
network, so the performance of the SDH network directly affects the Ethernet
service test results. Before the test, ensure that no abnormal alarms or events
are reported on the SDH network.

 Check whether the working modes of the ports interconnected by the data
analyzer and equipment are consistent. If the working modes are inconsistent,
packet loss may occur.

 For the Ethernet service test, the Ethernet frames sent by the data analyzer
are standard Ethernet frames, that is, frames without VLAN tags.

 Finally, disable the flow control function of all ports to eliminate the impact
of the flow control function on the indicator test.
 Test Environment Introduction

 Configure two EPL services with the same bandwidth between the two tested
sites (port 1 at one end corresponds to port 1 at the other end and port 2
corresponds to port 2 at the other end). The bandwidth and channel level at
the SDH side are allocated according to requirements.

 After the configuration is complete, connect the data analyzer to ports 1 and
2 of a site, and perform loopback on port 1 and port 2 of the other site.

 The data analyzer sends data frames (standard Ethernet frames) to port 1 of
the connected site. The data frames are recycled through port 2 of the site. In
this way, the unidirectional system throughput, frame loss rate, and latency
are tested.

 Generally, the test duration is 60 seconds, and the number of test times is 1.
You can set the test duration to 10 seconds or 3 seconds according to the
actual situation. In addition, you only need to test the value of the typical
packet length.

 Compare the obtained values with the technical specifications provided by
the vendor to check whether the Ethernet performance meets the
requirements.
 Test Environment Introduction

 Use two data analyzers to connect port 1 of the Ethernet boards at the two
sites.

 Configure an EPL service that uses port 1 at two sites as the source and sink.

 Use the two data analyzers to send data frames at the same rate. Observe the
transmit and receive status of the two analyzers.

 If the number of frames received at one end equals the number of frames
sent by the other end, no frame is lost.

 If the number of frames received at one end is not equal to the number
of frames sent by the other end, frame loss occurs in the system. In this
case, find out the cause.

 In this test method, the two Data Analyzers do not need to be synchronized;
you can still test whether packet loss occurs.

 Ethernet service testing is flexible. After learning the basic knowledge and test
methods in this document, apply them creatively in your daily work.
 The optical transport network consists of optical transmission equipment and
optical fibers. Most faults on the network are caused by faults of certain NEs or
fibers. Therefore, narrowing a fault down to a single NE in a specific area, that is,
locating the fault accurately to a single site, is the key to fault locating.

 In addition, the distance between sites is generally dozens or even hundreds of
kilometers, so before a fault has been narrowed down to a single site, blindly
suspecting a particular site is meaningless and wastes precious time.
 The general principles for locating faults are as follows:

 Locate the external first, and then the transmission.

 When locating a fault, you need to clear the external factors, such as the
fiber cut, the interconnected device fault, or the power supply fault.

 Locate the network first, and then the NE.

 When locating a fault, you need to locate the site where the fault occurs.

 Exclude the high-speed part first, and then the low-speed part.

 According to the alarm signal flow, the alarm of the high-speed signal
often causes the alarm of the low-speed signal. Therefore, when
locating a fault, you need to rectify the fault of the high-speed part first.

 Analyze high-level alarms first, and then low-level alarms.

 When analyzing alarms, analyze the alarms of higher severity first, such
as critical and major alarms, and then the low-level alarms, such as
minor and warning alarms. This principle can be used together with
“Exclude the high-speed part first, and then the low-speed part”. During
troubleshooting, handle the high-level alarms of the line first, then the
low-level line alarms, and then check whether alarms are reported on
the tributary; if alarms persist, analyze the tributary alarms according to
the same principle.

 The order in which these fault-handling principles are applied is not fixed; they
can be flexibly applied according to the live network.
 The common methods for locating faults are summarized as follows: First Analysis,
Second Loopback, Third Replace Boards.

 When a fault occurs, first analyze the alarm, performance event, and service flow to
determine the fault scope. Then, you can perform a segment-by-segment
loopback to rectify the external fault or locate the fault to a single NE or board. At
last, replace the faulty board and rectify the fault.
 Alarm masking: This command is used to mask all alarms of an NE or a board
of an NE. If an alarm is masked, the corresponding NE or board does not
monitor the alarm.

 Automatic alarm reporting: After automatic alarm reporting is set, when an
alarm is generated on the device side, the alarm is reported to the NMS
immediately and displayed on the alarm panel, where you can view the alarm
information without querying it. Unnecessary alarms can be set not to be
reported automatically.

 Alarm filtering: Implemented on the NMS; it does not affect NE alarms.
Reported alarms are received or discarded based on the alarm filtering
configuration, which is set per NE. If the filtering status is Enabled, the NMS
discards the alarm and does not record it in the NMS database. If filtering is
disabled, the alarm is received and recorded in the NMS alarm database.

 Alarm suppression: The root alarms that are directly triggered by abnormal
events or faults often generate low-level alarms, which interfere with the
locating and handling of alarms. You can mask non-root alarms by setting
alarm correlation rules in advance.
 The frame structure of SDH signals defines rich overhead bytes that contain
system alarms and performance information. Therefore, when a fault occurs in the
SDH system, a large number of alarms and performance events are generated. By
analyzing the information, you can determine the type and location of the fault.

 You can obtain alarm and performance event information in either of the following
ways:

 You can query the current or historical alarms and performance events of the
transmission system through the NMS.

 You can learn about the running status of the device by observing the status
of the running indicators and alarm indicators of the cabinet and boards.

 Limitations of Alarm and Performance Analysis

 When the networking, services, and fault information are complex, a large
number of alarms and performance events may be generated when faults
occur. Too many alarms and performance events make analysis difficult for
maintenance personnel.

 When certain faults occur, there may be no obvious alarms or performance
events, and sometimes none can be found at all. In such cases, the alarm
and performance event analysis method is of little help.
 When obtaining alarm or performance information through the NMS, make
sure that the running time of all NEs on the network is set correctly. If the NE
time is set incorrectly, the alarm or performance information may be reported
incorrectly or not reported at all. During maintenance, set the NE time to the
current time after redelivering the configuration to the NE; otherwise, the NE
will work with the default time rather than the current time.
 In this example, many alarms are reported on the entire network. It is difficult for
beginners to analyze the alarms. However, as long as we grasp the key and basic
principles of troubleshooting, it is not difficult to analyze the root alarms.

 It can be seen that R_LOS, MS_RDI, and HP_RDI are line alarms, while LP_RDI and
TU_AIS are tributary alarms. According to the principle “Exclude the high-speed
part first, and then the low-speed part”, the line alarms are analyzed first. The
RDI alarms are secondary, minor alarms, while R_LOS is a critical alarm.
According to the principle “Analyze high-level alarms first, and then low-level
alarms”, the root alarm of the entire network is R_LOS, which indicates that the
optical path from A to B is interrupted.

 In the process of handling transmission faults, there may be multiple fault points,
so it cannot be assumed that all faults on the network are caused by a single
alarm. In this example, R_LOS is the root alarm; after it is cleared, if other alarms
persist, further analysis is required.
 The detection mechanism of B2_SD/B3_SD/BIP_SD is the same as that of B1_SD.
The only difference is that the detected byte changes to B2/B3/V5 bytes.

 The detection mechanism of B2_EXC/BIP_EXC is the same as that of B1_EXC. The
only difference is that the detected byte changes to the B2/V5 bytes.
 The function of LP_RDI/MS_RDI is similar to that of HP_RDI. The only difference is
that the monitoring levels are different.

 The IN_PWR_LOW is an alarm indicating that the input optical power is too low.
This alarm is reported when the board detects that the actual input optical power
is lower than the lower threshold of the input reference value.
 The configuration data includes the following items:

 Node parameters of the multiplex section

 Path loopback setting for the line and tributary boards

 Tributary path protection attributes

 Path trace byte

 For example, if the SNCP protection of a certain tributary board does not work,
query whether the path attribute of the tributary board is set to "protection".

 You can view the operation log on the NM to check whether any improper
operation on the NM is performed.

 This method is applicable to further analysis of a known faulty NE and helps to find
the root cause of a fault. This method, however, takes a long time and requires
expertise in the field of optical transmission and essential product knowledge.
 The path trace byte J1 and the signal label byte C2 belong to the higher order
path overhead. If the received bytes are inconsistent with the bytes expected to
be received, the HP_TIM (J1 byte mismatch) and HP_SLM (C2 byte mismatch)
alarms are reported at the receive end. Although both are minor alarms on
Huawei equipment and do not affect services, they affect the judgment of
network faults, so they should be handled in a timely manner.
 For HP_TIM (J1 byte mismatch) and HP_SLM (C2 byte mismatch) alarms, first
exclude the possibility that bit errors prevent the receive end from receiving the
normal J1 and C2 values. In most cases the cause is that the contents to be
received and to be transmitted are configured inconsistently on the equipment
at the two ends. In this example, set the contents transmitted by NE1 as the
contents expected by NE3, and set the contents transmitted by NE3 as the
contents expected by NE1.
 For MSTP equipment, the pass-through mode is used for VC-4 level services
by default, and the termination mode is used for VC12/VC3 level services by
default.
 Overhead detection: The overhead is generated at the transmit end, and the
overhead detection is completed at the receive end. The line board extracts
the overhead bytes and performs related processing or reports related
alarms according to the extracted values.
 Overhead termination: When the transmission equipment detects the
overhead, it marks the overhead as its default value for transmission. The
overhead that is sent to the peer device is regenerated (or default).
 Overhead pass-through: After detecting the received overhead, the
transmission equipment directly forwards the received overhead to the
opposite station without changing its value.
 Use this method to modify the timeslot configuration, slot configuration, and
board parameter configuration. This method also removes problems caused by
configuration errors in a known faulty NE. In addition, this method is used to
troubleshoot pointer justifications.

 Application

 If a path on a tributary board or a tributary board itself is suspected to be
faulty, modify the timeslot configuration to shift the payload to another path
or tributary board.

 If a certain slot is suspected to be faulty, change the slot configuration.

 If a VC-4 is suspected to be faulty, shift the traffic to another VC-4.

 During the upgrade or expansion, if you are unsure about the new
configuration, you can re-load the original configuration for confirmation.

 Modifying the timeslot configuration, however, does not help to locate the
faulty point or faulty board, for example, a line board, tributary board, cross-
connect board or backplane.

 In this case, use the replacement method or the loopback method to further
locate the fault. This method is applicable in the preliminary process of
locating faults when spare boards are not available. Other service channels or
slots are used to resume the service temporarily.

 To modify the configuration in the case of pointer justification, modify the
tracing direction and the reference source of the clock.
 When using this method, you need to analyze the current alarm information first
to confirm the affected service channels. Therefore, these methods of
troubleshooting are not independent of each other but can be flexibly used in the
specific fault environment.

 In this example, according to the analysis of alarms and service configurations,
only the services from NE1 to NE3 are affected. No alarm is reported on the line,
but the TU_AIS and LP_RDI alarms are reported on the tributary. This indicates
that the services from NE1 to NE3 are faulty. The fault may therefore be caused
by the PQ1 board in slot 3 of NE1 or the corresponding PQ1 board on NE3 (the
lines and cross-connections of NE1, NE2, and NE3 cannot be excluded), but the
most probable cause is the tributary boards at both ends of the service.
 The process is as follows:

 Select a 2 Mbit/s channel from the faulty channels in slot 3 of NE1 and
configure a service from this channel to NE2. The second VC-4 is used on the
line. (You do not need to modify the cross-connection of NE1, but delete the
pass-through service on NE2, set up the cross-connection to add/drop
services).

 Check the alarm status of the new service.

 If no alarm is reported, locate the fault on NE3. You can further locate
the faulty board by performing TPS switching and switching between
the active and standby cross-connect boards.

 If the TU_AIS alarm is generated on NE2, locate the fault on NE1. You
can further determine the fault point through loopback.
 Loopback is the most popular and effective method for locating faults on OptiX
series equipment. Its most significant advantage is that no thorough data or
performance analysis is needed, so maintenance personnel are expected to be
familiar with this method. However, loopback may cause temporary interruption
of normal services in the loopback channel, which is its biggest disadvantage.
Therefore, use the loopback method only when a major fault, such as service
interruption, has occurred.

 Loopback can be classified into the following types:

 According to the signal flow direction, there are two types of loopback:
Inloop and outloop. When the inloop is performed on the local NE, the
signals are transmitted to the NE. When the outloop is performed, the signals
are transmitted from the local NE to the external NE. It can be said that the
inloop is oriented to the local NE, while the outloop is oriented to the
opposite end. For example, to test whether the interface module and external
cable of the board are normal, you need to set outloop. To test whether the
cross-connect unit and service path of the equipment are normal, you need
to set the inloop.
 In the preceding figure, W2:17 indicates that the west optical port uses the
seventeenth VC-12 of the second VC-4. t2:1 indicates that port 1 on the second
tributary board is used.
 Perform loopbacks segment by segment to locate the faulty site. The detailed
operations are as follows:
 In the direction of the BER tester, perform the software outloop on the e2 of
NE2. If the BER tester displays interruption, the fault is located on NE3 and
the east side line board of NE2. If the BER tester is normal, go to the next
step.
 Remove the loopback. Perform a software inloop on the w2 of NE2. If the
BER tester displays interruption, the fault occurs on NE2, including the east
side and west side boards of NE2. If the BER tester is normal, the fault occurs
on NE1 and the west side line board of NE2.
 Remove the loopback.
 Generally, faulty services are associated with each other, so if one of them is
restored, the others may be restored automatically. In addition, simplifying the
problem by sampling often makes fault analysis and handling clearer and
simpler. Especially when the faulty services are complex, the sampling
simplification method is effective and can even serve as the starting point or
breakthrough point of fault locating.
 You can perform loopback to quickly locate a fault to a station or even to a board
without spending excessive time on analyzing the alarm or performance events.
The operations are simple and can be easily mastered by the maintenance
personnel.
 However, if other normal services exist in the loopback path, the loopback
method may cause transient interruption of those services. Therefore, adopt the
loopback method only in the case of a critical fault, for example, when the
services are already interrupted.
 Replacement is applicable when you handle problems of the external equipment,
such as an optical fiber, trunk cable, switch, and power supply equipment. In
addition, the replacement method is also used to remove the problem in the
board at a single NE.

 This method is practical and is easy for the maintenance personnel to understand.
This method, however, requires spare parts. Extra care is required during its
application. Improper handling of the board or component during the replacement
may cause damage and even result in a new fault.

 Notice: The replacement method is not a narrow concept. For an NE configured
with TPS protection or active/standby 1+1 cross-connect protection, or a
network with self-healing protection, switching can also be considered a form of
replacement.
 According to the fault description, loopback is performed on the 40th 2 Mbit/s
channel through the NMS. The loopback takes place on the 2M processing board;
the interface board and the PDH interface unit of the processing board are not
included in the loop, so the fault location cannot yet be determined precisely.
After a loopback toward the transmission side is performed at the DDF, the fault
persists. Therefore, it can be determined that the fault lies between the DDF and
the 2M processing board.
 This method is used to clear external faults or to locate problems related to interconnection.
 Application
 To check whether the voltage of the power supply is too high or too low, use a multimeter to measure the input voltage.
 To check whether poor interconnection between two devices is caused by the grounding, use a multimeter to measure the voltage between the shielding layers of the coaxial ports of the transmitter and receiver on the interconnecting path. If the voltage is more than 0.5 V, the grounding is improper.
 To check whether the poor interconnection is caused by a wrong signal, use proper analyzers to check whether the frame signals and the overhead bytes are normal, and whether there are any abnormal alarms.
 This method provides accurate results, but it depends on the accuracy of the meters and the professional knowledge of the personnel.
 It can be seen from this example that, for transmission faults, alarms generated at a certain point cannot be analyzed at only one site. Instead, they need to be analyzed and handled at the network layer based on the upstream and downstream sites. This requires the maintenance personnel to have an overall view of the network they maintain.
 The PRBS test for lower order services, conducted by tributary boards, is used to
monitor the lower order service channels of all the boards in a system. The PRBS
test for higher order services, performed by cross-connect or line boards, can only
monitor STM-1 channels.
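To illustrate what a PRBS test actually does, the following is a minimal sketch (not the equipment's implementation) of a 2^15-1 pseudo-random bit sequence, generated by a linear feedback shift register with the polynomial x^15 + x^14 + 1, a pattern commonly used for 2 Mbit/s testing per ITU-T O.150/O.151. The comparison function mimics how a BER tester counts mismatches between the sent and received patterns.

```python
def prbs15(seed=0x7FFF):
    """Yield bits of the 2^15-1 PRBS defined by x^15 + x^14 + 1 (Fibonacci LFSR)."""
    state = seed & 0x7FFF
    while True:
        bit = ((state >> 14) ^ (state >> 13)) & 1  # feedback from taps 15 and 14
        state = ((state << 1) | bit) & 0x7FFF
        yield bit

def bit_error_count(sent, received):
    """Compare the transmitted pattern with the received one, as a BER tester does."""
    return sum(s != r for s, r in zip(sent, received))
```

Over one full period (32767 bits) a maximal-length sequence contains exactly 16384 ones, which is one way to sanity-check the generator.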
 The RMON feature defines Ethernet performance management methods based on the management information base (MIB) in the simple network management protocol (SNMP) architecture. By using the RMON function, you can monitor the performance of Ethernet ports in the same manner as you monitor the performance of SDH or PDH ports.
 The RMON function defines a series of statistics formats and functions to realize data exchange between the control stations and detection sites that comply with the RMON standards. To meet the requirements of different networks, the RMON function provides flexible detection modes and control mechanisms. In addition, the RMON function provides error diagnosis, planning, and collection of performance event information for the entire network.
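As a rough illustration of the kind of statistics RMON accumulates per Ethernet port, the sketch below keeps a few counters loosely modeled on the etherStats group of RFC 2819. The structure and field names are simplified assumptions for illustration, not the device's actual MIB.

```python
from dataclasses import dataclass

@dataclass
class EtherStats:
    """Simplified RMON-style Ethernet counters (names loosely follow RFC 2819 etherStats)."""
    pkts: int = 0
    octets: int = 0
    crc_align_errors: int = 0
    undersize_pkts: int = 0   # shorter than 64 octets, CRC good
    oversize_pkts: int = 0    # longer than 1518 octets, CRC good

    def count(self, length, crc_ok=True):
        """Account for one received frame, as an RMON probe would."""
        self.pkts += 1
        self.octets += length
        if not crc_ok:
            self.crc_align_errors += 1
        elif length < 64:
            self.undersize_pkts += 1
        elif length > 1518:
            self.oversize_pkts += 1

stats = EtherStats()
for length, ok in [(64, True), (1518, True), (60, True), (1600, True), (512, False)]:
    stats.count(length, ok)
```

A control station would then poll such counters periodically and raise a performance event when a configured threshold is crossed.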
 Sometimes, a running board enters an abnormal state due to a power supply transient, low voltage, or strong external electromagnetic interference. Service interruption and ECC communication interruption may or may not be accompanied by corresponding alarms, and the configuration data may also be correct. In this case, the fault can be cleared and normal services can be resumed in time by resetting the board, restarting the site, re-sending the configuration, or switching the services to the standby path.

 The major disadvantage of this method is its uncertainty: the root cause is not actually identified, and the alarm may persist after the board or power is reset. Use this method only as a last resort. Commonly, you can apply the previous methods to locate faults or obtain technical support through the authorized channels.
 The common methods used in fault locating have different features. The above table compares the fault locating methods.
 In actual applications, maintenance personnel need to use various methods together to locate and rectify faults.
 Answer:
 The configuration data analysis method is used to locate the fault to the board and find out the cause of the fault. It takes a long time to locate the fault.
 The alarm and performance event analysis method is universal and can be used on the entire network to foresee potential equipment risks. Services are not affected.
 The configuration modification method is used to locate the fault to the board and rectify pointer justification problems. It is complex.
 The PRBS test method is used to locate the fault to the board or cross-connect chip. A meter-free test is performed to monitor all the lower order paths.
 The meter test method is used to separate external faults and solve interconnection problems, and its results are persuasive. A meter is required.
 The RMON performance analysis method is used to locate the fault to the network segment. Ethernet ports can be managed, the detection mechanism is flexible, and network-wide error diagnosis is provided.
 The loopback method is used to locate a fault to a single site or to separate an external fault. It does not depend on the analysis of alarms or performance events, but ECC and normal services may be affected.
 The replacement method is used to locate a fault to a board or to separate an external fault. Spare parts are required.
 The experience processing method is used only in some special cases and should be avoided whenever possible.
 Other fault types of the MSTP equipment: NEs are unreachable, communication
between boards fails, services are transiently interrupted, and pointer justification
occurs.
 For the troubleshooting of transmission equipment, the handling process is similar regardless of the type of the fault: exclude external faults first, then locate the fault to a single site, then locate the faulty board, and finally rectify the fault.
 For service interruption faults, external causes must be excluded first, for example: the subrack is powered off due to a short circuit, overvoltage, undervoltage, a loose power connector, a PIU board failure, or misoperation; abnormal joint grounding; abnormal ambient temperature and humidity; a strong external interference source; rodent damage; optical cable interruption or splicing errors; excessive attenuation of the optical cable or flange; a loose cable connector; or poor contact on the interface board.

 After the external causes are excluded, check whether the current configuration
data is correct and whether misoperations are performed, including:

 The cross-connection is incorrect.

 The enabled services are configured with a hardware or software loopback, or the services are not loaded. If a loopback is configured on the tributary or line, services will be interrupted. In this case, you need to remove the software or hardware loopback on the NMS or the device.

 If the service is not loaded on the NMS, the service will also be unavailable. In
this case, you need to change Service Unload to Service Load on the NMS.

 If there is no external or manual fault, you can infer that the fault is caused by the
device itself. In this case, you can locate the fault to a single site or board and
rectify the fault according to the methods described in chapter 2.
 To locate service interruption faults, perform the following steps:
 Alarm analysis
 When a service interruption occurs, the device usually reports some alarms. By analyzing these alarms, you can locate the fault accurately.
 Segment-by-segment loopback
 If the fault point cannot be located by observing the alarms and performance events, you can use the loopback method to rectify the fault. During the loopback, the faulty channel needs to be sampled. The maintenance personnel must be familiar with the service distribution on the network.
 Replacement method and configuration modification method
 Case Introduction:

 NE1 is the central node. That is, the other NEs each have services with NE1, and no services exist between the other NEs.

 Observe the entire network. Only two alarms are found: the TU_AIS alarm is reported on NE4 and the LP_RDI alarm is reported on NE1. According to the analysis of the alarms and service configurations, the services sent from NE1 to NE4 are faulty. The maintenance engineer can suspect that the fault lies somewhere on the link from NE1 to NE4, but cannot locate it immediately. A loopback can be performed to locate the fault.

 In addition, according to the experience, since only the services from NE1 to
NE4 are faulty, the services between NE1, NE2, and NE3 are normal.
Therefore, it is suspected that the optical line and the interconnected optical
board from NE3 to NE4 are faulty.
 The procedures for analyzing alarms are as follows:

 Query the alarm time of NE1 and NE4. It is found that the TU_AIS report time
of NE4 is earlier than that of the LP_RDI of NE1. In this case,
________________________.

 If the services of NE2, NE3, and NE4 are transmitted through different VC-4s,
you can use the __________ method to locate the fault.

 If the services of NE2, NE3, and NE4 are transmitted through same VC-4, the
loopback cannot be performed on NE2 and NE3. In this case, you need to use
the ______________ method to locate the fault.

 Answer:

 The alarm on NE1 is caused by NE4

 Loopback

 Configuration Modification
 To locate the fault, use the loopback method first. On NE1, use the BER tester to test the faulty channel. Because the services between NE1 and NE2 and between NE1 and NE3 are normal, perform loopbacks segment by segment on NE3 and NE4 to locate the fault to a single site. Pay attention to the following points when performing a loopback:

 Try to use the VC-4 loopback instead of the optical/electrical port loopback
to minimize the impact of the loopback on normal services.

 If the services from NE1 to NE4 use the same VC-4, you are advised to modify the service configuration and then perform the loopback. If you do not modify the service configuration, you can only perform a loopback on the east optical board of NE3, and then determine whether the fault is caused by NE3 or NE4.
 In the case of segment-by-segment loopback, only one loopback can be performed at a time; do not perform multiple loopbacks at the same time. After the loopback test, cancel the loopback operation to avoid misoperations.
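The segment-by-segment procedure can be sketched as a search along the service path: set a loopback at each successive NE, test from NE1, and the first loopback point at which the test fails brackets the fault. The NE names and the `test_passes` callback below are hypothetical stand-ins for the real BER test.

```python
def locate_faulty_segment(path, test_passes):
    """Loop back at each NE along the path in turn, starting nearest the tester.

    `path` is the ordered list of NEs outward from the test set, e.g.
    ["NE1", "NE3", "NE4"]. `test_passes(ne)` returns True if the BER test
    is clean with a loopback set at `ne`. Returns the segment (prev, ne)
    in which the fault lies, or None if every loopback tests clean.
    Only one loopback is assumed to be active at a time.
    """
    prev = None  # tester side, before the first NE
    for ne in path:
        if not test_passes(ne):   # fault lies between the last good point and this NE
            return (prev, ne)
        prev = ne                 # this segment is clean; move the loopback outward
    return None

# Hypothetical example: the span toward NE4 is broken, so a loopback
# at NE4 cannot return a clean signal to the tester.
faulty = {"NE4"}
segment = locate_faulty_segment(["NE1", "NE3", "NE4"], lambda ne: ne not in faulty)
```

Remember that, as stated above, each loopback must be removed before the next one is set.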
 The principle of configuring the VC-12 loopback service is similar to that of the
VC-4 loopback service.

 Select a 2M service from the interrupted services as the observation object. Connect the BER tester to the 2M channel of NE1.

 On the NMS, delete the 2M service from NE1, and then configure a VC-12
loopback service. Note that the source and sink timeslots of the service are
the same, as shown in the table.

 Check whether the TU_AIS alarm is generated on the BER tester. If yes, it
indicates that the problem occurs on NE1. If not, perform the same
configuration on the downstream sites such as NE2.

 Restore the deleted services on NE1, and configure a VC-12 loopback service
on NE3. The source and sink timeslots are shown in the above table. Observe
the alarm indication on the BER tester.

 In this way, the fault can also be located, achieving an effect similar to that of the VC-4 loopback, while a loopback of the entire VC-4 is avoided and other normal services are not affected.

 After the fault is rectified, delete the VC-12 loopback services and restore the
original services in time. Otherwise, the timeslots may be occupied.
 Assume that according to the analysis, NE4 is faulty. Although the alarm is
reported by the tributary board, the fault may be caused by the tributary board,
line board, or cross-connect board. Therefore, how to locate the fault to the board
is the key point.
 After locating the fault to the NE, you need to further locate the faulty board. The procedure is as follows:
 First, check the tributary board. If TPS protection is configured, use TPS protection switching to switch the services to the protection board and observe the alarm. If the alarm is cleared, the tributary board is faulty; replace it according to the operation specifications. If the alarm persists, perform an active/standby switchover on the cross-connect board and observe the alarm. If the alarm is cleared, the cross-connect board is faulty; replace it according to the operation specifications.
 After the tributary board and the cross-connect board are excluded, only the line board is left. The line board has no hardware-level backup, so you can only locate the fault by replacing the board. When inserting or removing a line board, do not do so with the fibers attached. Otherwise, the optical fibers may be stretched or even broken.
 Restore the environment and record the handling result.
 After the fault point is located to the NE, if the NE has no TPS or active/standby cross-connect configuration, you can only locate the faulty board by replacing boards with spares. The replacement order is the tributary board first, then the line board, and then the cross-connect board.
 Case introduction: The MSP is mainly used at the convergence layer and core layer
of the network. The convergence layer and core layer have high service
requirements and high security requirements. Therefore, when a fault occurs, how
to quickly locate and rectify the fault is the most important concern of
maintenance personnel. This case describes how to troubleshoot the fault that
services are unavailable after the MSP switching.
 To rectify this type of fault, perform the following steps:

 When alarms are generated on the entire network, use the principles "exclude the high-speed part first, and then the low-speed part; analyze the high-level alarm first, and then the low-level alarm" to find the most probable root alarms. In this case, the R_LOS alarm reported by NE2 and NE3 triggers the MSP switching. The switching only affects the services between NE1 and NE3. The tributaries of NE1 and NE3 report the TU_AIS alarm, indicating that the switching fails and the services are interrupted.
 Query the enabling status of the APS protocol on the entire network. If the
APS protocol is not enabled for some NEs, enable the APS protocol first. If
APS protocols of all NEs are enabled, query the switching status of all NEs.
The west optical board of NE3 and the east optical board of NE2 should be in
the switching state. The east optical board of NE3 and the west optical board
of NE2 should be in the normal state. Other NEs must be in the pass-through
state. If the status is abnormal, you can restart the APS protocol on the entire network and re-deliver the MSP node parameters. After the protocol is restarted on the entire network, query the switching status and network-wide alarms again. In this example, assume that the switching status is normal but the alarm persists.

 After confirming that the switching status of the entire network is normal, it
can be determined that the fault is caused by the equipment. Analyze the
path of the faulty channel after the switching and perform the loopback by
segment. Then, locate the fault to the single site and even board according to
the principle “single site first, and then board”.
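The expected-state check in the second step can be sketched as follows. The sketch simplifies to one APS state per NE (ignoring the east/west board direction described above), and the NE names and state strings are illustrative assumptions.

```python
def expected_aps_states(ring, fault_span):
    """Expected APS state of each NE on an MSP ring after a span failure.

    `ring` is the ordered list of NEs; `fault_span` names the two adjacent
    NEs whose connecting span failed. The two NEs adjacent to the failure
    should be switching; every other NE should be passing through.
    """
    a, b = fault_span
    return {ne: ("switching" if ne in (a, b) else "pass-through") for ne in ring}

def find_abnormal(reported, expected):
    """List the NEs whose reported APS state differs from the expected one."""
    return sorted(ne for ne in expected if reported.get(ne) != expected[ne])

expected = expected_aps_states(["NE1", "NE2", "NE3", "NE4"], ("NE2", "NE3"))
reported = {"NE1": "pass-through", "NE2": "switching",
            "NE3": "pass-through", "NE4": "pass-through"}  # NE3 failed to switch
```

Any NE returned by `find_abnormal` is where the APS protocol should be restarted and the MSP node parameters re-delivered.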
 To perform a loopback on a two-fiber bidirectional MSP ring, perform the
following steps:

 Step 1: Sampling of interrupted service channels.

 Select the first 2 Mbit/s channel of the second tributary board of NE1 (that is, t2:1 of NE1) and the first 2 Mbit/s channel of the second tributary board of NE3 (that is, t2:1 of NE3).

 Step 2: Draw the path diagram of the interrupted service.

 The following figure shows the paths before and after protection
switching. Note that the ring rate is STM-16. Therefore, services in the
third VC-4 are bridged to the eleventh VC-4 in the other direction after
the switching.

 Step 3: Perform loopbacks segment by segment to locate the faulty site.

 Connect a meter to the first 2M channel of the second tributary board of NE1 (you can also check through the NMS whether the TU_AIS alarm on that channel is cleared), and then configure a VC-4 loopback service segment by segment on the NMS to locate the fault.

 After locating the fault to a single site, use the method described previously to
locate the fault to the board.
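The bridging rule mentioned in step 2 follows from the structure of a two-fiber bidirectional MSP ring: at STM-16, the first half of the VC-4s (1-8) carry working traffic and the second half (9-16) carry protection, so working VC-4 n is bridged to protection VC-4 n+8 in the other direction. A one-function sketch of this mapping (assuming this half-and-half capacity split):

```python
def protection_vc4(n, stm_level=16):
    """Map a working VC-4 to its protection VC-4 on a two-fiber bidirectional
    MSP ring: the second half of the ring capacity protects the first half."""
    half = stm_level // 2
    if not 1 <= n <= half:
        raise ValueError("working VC-4 must lie in the first half of the ring capacity")
    return n + half
```

This is why the services in the third VC-4 appear in the eleventh VC-4 after the switching.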
 For bit error faults, the external causes must be excluded first.

 Optical power problem: Check whether the transmit optical power of the
upstream site is normal. Check whether the ports on the ODF, attenuator,
flange, and optical interface board are tightly connected. Check whether the
ports on the ODF, attenuator, flange, and optical interface board are clean.
Check whether the optical fiber is squeezed. Check whether the bending
radius of the optical fiber is too small. Check whether the type of the optical
interface board is consistent.

 Grounding problems: The PGND and BGND are not properly grounded. The
grounding resistance is greater than 2 Ω. The potential difference between
BGND and PGND is greater than 0.5V. The PGND, BGND, and AC neutral wire
share the ground. The PGND cables of the two interconnected devices are
not jointly grounded.

 Ambient temperature: The fan in the subrack is faulty. The air filter of the
subrack has excessive dust, and the ventilation of the equipment is poor. The
air conditioner in the equipment room is faulty. Check whether the
interconnected devices share the same ground.

 Long-term operation: 0°C to 45°C
 Short-term operation (no more than 96 hours of continuous operation and no more than 15 days per year): -5°C to 55°C
 B1, B2, B3, and V5 bit errors are monitored separately in the RST (Regenerator
Section Termination), MST (Multiplex Section Termination), HPT (Higher Order
Path Termination), and LPT (Lower Order Path Termination). If only the lower order
path has bit errors, the higher order path, multiplex section, and regenerator
section cannot detect the bit error. If there are bit errors in the regenerator section,
bit errors will also occur in the multiplex section, higher order path, and lower
order path. Generally speaking, higher order bit errors imply lower order bit errors: for example, if there are B1 bit errors, there are also B2, B3, and V5 bit errors. Lower order bit errors, however, do not necessarily imply higher order bit errors: if there are V5 bit errors, B3, B2, and B1 bit errors may not occur. When the local end of an optical synchronous transmission system detects bit errors, in addition to reporting an alarm or performance event locally, it notifies the opposite end of the detection through overhead bytes.
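The containment relationship between the monitoring layers can be expressed directly: an error detected at one layer implies errors at every layer below it, so the highest layer showing errors is where analysis should start. A minimal sketch (layer labels are illustrative):

```python
# Monitoring layers from the highest (regenerator section) down to the lowest.
LAYERS = ["B1 (RS)", "B2 (MS)", "B3 (HP)", "V5 (LP)"]

def implied_errors(detected):
    """Given the set of layers with detected bit errors, return all layers
    expected to show errors: an error at one layer implies errors at every
    lower layer (e.g. B1 errors imply B2, B3, and V5 errors)."""
    if not detected:
        return set()
    top = min(LAYERS.index(layer) for layer in detected)
    return set(LAYERS[top:])
```

For instance, detecting only V5 errors implies nothing about B3, B2, or B1, whereas detecting B1 errors implies errors at all four layers.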

 Abnormal optical power on the line board is a common cause of bit errors. When the receive optical power is too high or too low, the receive optical module cannot receive optical signals normally, and B1, B2, B3, and V5 bit errors occur. Therefore, when the equipment reports a large number of bit errors, check whether the receive and transmit optical power is normal.
 The processing of bit errors can also follow the principle "exclude the high-speed part first, and then the low-speed part". Bit errors on the line often cause the tributary to report bit errors. The difference is that you analyze bit error performance events rather than bit error alarms. For bit error performance events, pay attention not only to where they are reported but also to their specific values.

 If the local end reports the BBE performance event, it indicates that the local
receive end detects bit errors and the channel between the remote end and
the local end is faulty.

 If the local end reports the FEBBE performance event, it indicates that the
remote receive end detects bit errors and the channel between the local end
and the remote end is faulty.
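The direction rule for BBE and FEBBE can be written down directly. A tiny sketch (function name and tuple convention are illustrative; the returned pair is the faulty direction, from sender to receiver):

```python
def faulty_direction(event, local, remote):
    """BBE means the local receiver saw errored blocks (remote -> local path);
    FEBBE means the far-end receiver saw them (local -> remote path)."""
    if event == "BBE":
        return (remote, local)
    if event == "FEBBE":
        return (local, remote)
    raise ValueError("expected BBE or FEBBE")
```

So when an NE reports both BBE and FEBBE on the same channel, both directions of that channel need to be examined.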

 In this example, although a large number of performance events are reported on the entire network, all of them are aggregated on the service channels between NE1 and NE4. According to the principle "exclude the high-speed part first, and then the low-speed part", the bit error performance events between NE3 and NE4 need to be processed first. In addition, the FEBBE event is reported together with the BBE event. Therefore, it can be determined that the east optical board of NE3 detects bit error blocks and reports the RS_BBE, MS_BBE, and HP_BBE events, which causes the other NEs to report bit error performance events.
 The analysis contents of the Alarm and Performance Event Analysis includes:

 Query the alarm information and performance events through the NMS, such as BBE, FEBBE, B1_OVER, B2_OVER, B3_OVER, BIP-EXC, SD, and SF, and check whether there is a notification relationship between the alarms and the performance events.

 Check the NEs, boards, and channels where the alarm or performance event
occurs. It is recommended that you provide the path diagram of the service
channel affected by bit errors, especially the service path diagram where
service interruption occurs.

 For the alarms/performance events reported by the NMS, query the time when they were reported, especially for transient service interruptions caused by excessive transient bit errors. In this case, query the generation time of the alarms/performance events to determine whether the transient service interruption is caused by temperature change.

 During the actual operation of the device, a small number of bit errors are generated due to various factors, but services are not affected. Services are affected only when the number of reported bit error blocks reaches a certain value. For example, there is a correlation between the number of bit error blocks and the TU_AIS alarm: when the number of bit errors crosses the threshold, a transient TU_AIS interruption is reported. Compare the numbers of bit error blocks reported by different service channels, analyze the paths of these channels, and then analyze all the sites in each path.
 Check whether the transmit optical power of NE4 is normal. If not, replace
the board. Note that bit error performance events do not interrupt services.
Therefore, do not replace boards during peak hours.

 If the transmit optical power of NE4 is normal, check whether the receive
optical power of NE3 is within the acceptable range. If not, replace the flange,
patch cord, and ODF ports of NE3 and NE4 to exclude possible problems.

 After the external fault is rectified, perform loopbacks segment by segment to locate the fault to the NE and then to the board. Then rectify the fault.
 Answer: Query the enabling status of the APS protocol on the entire network. If
the APS protocol is not enabled for some NEs, enable the APS protocol first. If all
protocols are enabled, query the switching status of all NEs, including the
switching state, normal state, and pass-through state. If the APS protocol status is
abnormal, restart the APS protocol on the entire network and re-deliver the MSP
node parameters. After the protocol on the entire network is started, query the
switching status and network-wide alarms again. After confirming that the
switching status of the entire network is normal, it can be determined that the
equipment is faulty. Analyze the path of the faulty channel after the switching, perform loopback tests segment by segment, and locate the fault first to a single site and then to the board.
 Service configuration

 The traditional transmission network adopts the chain topology and ring
topology. During service configuration, services need to be configured one
by one. As the network scale expands and the network structure becomes
more and more complex, this service configuration mode cannot meet the
requirements of fast growing users.

 Bandwidth utilization

 Traditional SDH optical transmission networks reserve a large amount of resources and lack advanced service protection, restoration, and routing functions.

 Protection scheme

 In the traditional SDH network, the main topologies are ring and chain, and the main service protection schemes include SNCP and MSP. However, there is no proper protection scheme for a mesh network.
 With the emergence of ASON, the optical network has taken a solid step towards intelligence and fast development, which lays a solid foundation for the continuous development of the entire communication network. The feature of ASON is that it introduces the concept of signaling into the transmission network for the first time, and integrates the advantages of data network and transmission network management to realize real-time, dynamic network management.
 Service Configuration

 The ASON successfully solves this problem by end-to-end service configuration. To configure a service, you only need to specify its source node, sink node, bandwidth requirement, and protection type; the network automatically performs the required operations.

 Bandwidth Utilization

 Traditional SDH optical transmission networks reserve a large amount of resources and lack advanced service protection, restoration, and routing functions. In contrast, with the routing function, the ASON can provide protection while reserving fewer resources, therefore increasing network resource utilization.

 Protection and Restoration

 Chain and ring are the main topologies used in a traditional SDH network. MSP and SNCP are the main protection schemes for the services. In ASON, mesh is the main topology. Besides MSP and SNCP protection, the dynamic restoration function is available to restore services dynamically. In addition, when there are multiple failures in a network, as many services as possible can be restored.
 ASON NE
 An ASON NE is one of the topology components in the ASON. Compared
with a traditional NE, an ASON NE has the functions of link management,
signaling and routing.
 TE Link
 TE link is a traffic engineering link. The ASON NE sends its bandwidth
information to other ASON NEs through the TE link to provide data for route
computation.
 ASON Domain
 An ASON domain is a subset of a network, which is classified by function for
the purpose of route selection and management. An ASON domain consists
of several ASON NEs and TE links. One ASON NE belongs to one ASON
domain.
 SPC
 In the case of soft permanent connection (SPC), the connection between the
user and the transmission network is configured directly by the NM. The
connection within the transmission network, however, is requested by the
NM and then created by the NE's control plane through signaling. When
ASON service is mentioned, it usually refers to SPC. In an ASON, to create
ASON services is to create SPCs.
 The ASON has three planes: the control plane, the transport plane, and the
management plane.
 Control Plane
 The control plane consists of a group of communication entities. It is
responsible for the calling control and connection control.
 The control plane dynamically controls the transport plane through signaling
exchange. It sets up, releases, monitors, and maintains connections. In cases
of faults, the control plane restores the failed connections automatically
through signaling exchange.
 Transport Plane
 The traditional SDH network is the transport plane. It transmits optical signals.
 It transmits and multiplexes optical signals, configures cross-connection and
protection switching for optical signals, and guarantees the reliability of all
optical signals. The switching operations on the transport plane are
performed under the control of the management plane and control plane.
 Management Plane
 The management plane is a complement to the control plane. It maintains
the transport plane, the control plane and the whole system. It can configure
end-to-end services.
 Its functions include performance management, fault management,
configuration management and security management. The functions of the
management plane are coordinated with the functions of the control plane
and transport plane.
 Three interfaces connect the planes: the CCI interface between the control plane and the transport plane, the NMI-T interface between the management plane and the transport plane, and the NMI-A interface between the management plane and the control plane.
 Generally, NMI-T and NMI-A are together called NMI interfaces.
 SC:
 A switched connection is a service connection initiated by a terminal user towards the control plane of an ASON network; the control plane establishes the connection by using signaling. For example, when a router is interconnected with a transmission device, the router initiates a service request, and the ASON network responds to the request and establishes the required cross-connections through signaling. This is a switched connection.
 PC:
 A permanent connection is a service connection that is set up by the NMS
after the NMS calculates the cross-connection status and sends a command
to the device. For example, traditional SDH services are permanent
connections.
 SPC:
 An SPC combines the features of the preceding two connection types. In the case of soft permanent connection (SPC), the connection between the user and the transmission
transmission network is configured directly by the NM. The connection within
the transmission network, however, is requested by the NM and then created
by the NE's control plane through signaling. When ASON service is
mentioned, it usually refers to SPC. In an ASON, to create ASON services is to
create SPCs.
 The current ASON version does not support SCs.
 ASON Software
 The ASON software and NE software run on the SCC board, whereas the
board software and network management (NM) software run on the boards
and NM computer respectively, to implement corresponding functions.
ASON software is used mainly on the control plane, using Link Management
Protocol (LMP), OSPF-TE, and RSVP-TE.
 GMPLS developed from MPLS:

 MPLS is mainly used in datacom networks, where the object of label mapping is the data packet. GMPLS provides generalized label mapping: besides data packets, GMPLS can also map TDM timeslots, wavelengths, and so on as labels. Hence GMPLS can be used in SDH to realize the ASON function.

 Bidirectional transport: uniform signaling is used to manage the uplink and downlink LSPs (Label Switched Paths) to reduce the service creation time and signaling overhead.
 Enhanced signaling capability: cross-connections are established by the signaling protocol (RSVP-TE).
 Integrated routing capability to provide the topology and resource discovery functions (OSPF-TE).
 LMP is used as the link management protocol to discover neighbors, locate faults,
and manage link resources.

 OSPF is used as a routing protocol to spread, collect, and calculate route resources.
The control plane generates control link LSA information by sending Hello packets
between neighbors and floods the information to the entire network to generate
control plane topology information. The service plane generates TE LSA based on
TE link information provided by LMP, and floods the information to the entire
network to generate the service plane topology information.

 RSVP-TE is used as the signaling protocol to automatically establish and delete service LSPs, modify LSP attributes, reroute, and optimize paths. After the NMS delivers a service request, RSVP-TE initiates a route calculation request to OSPF and obtains the service path information. LMP provides control link information and fault location information for RSVP-TE.
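The division of labor among the three protocols when a service is created can be sketched as a sequence. Everything here is a simplified illustration: the function names are hypothetical, the topology is an adjacency map built from the TE links LMP would advertise, a breadth-first search stands in for the real CSPF route computation, and the per-hop signaling exchange is reduced to a placeholder.

```python
from collections import deque

def ospf_te_compute_route(src, dst, topology):
    """Breadth-first search over TE links, as a stand-in for CSPF."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def rsvp_te_setup_crossconnect(ne):
    pass  # placeholder for the per-node signaling exchange

def create_spc(src, dst, topology):
    """Sketch of the control-plane flow for a service request from the NMS:
    RSVP-TE asks OSPF-TE for a route over the TE links advertised by LMP,
    then sets up cross-connections hop by hop along the returned path."""
    path = ospf_te_compute_route(src, dst, topology)  # routing: path computation
    if path is None:
        return None
    for ne in path:
        rsvp_te_setup_crossconnect(ne)                # signaling: per-hop setup
    return path

topology = {"NE1": ["NE2"], "NE2": ["NE1", "NE3"],
            "NE3": ["NE2", "NE4"], "NE4": ["NE3"]}
```

The point of the sketch is the ordering: routing (OSPF-TE) supplies the path, and only then does signaling (RSVP-TE) establish the cross-connections along it.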
 LMP provides the following functions:

 Creating and maintaining the control channels between adjacent nodes.
 LMP sets up and maintains the control channels between adjacent nodes. The control channel maintained by LMP is used only for link verification. The connectivity check and attribute consistency check between adjacent nodes can be performed only after a control channel is available.

 Verifying TE link attribute consistency.

 It refers to a process in which LMP integrates multiple data links into


one traffic engineering (TE) link and synchronizes the attributes of this
TE link to ensure the consistency between the TE link attribute
configurations of the nodes at the two ends of the link.
 Control Channels
 LMP creates and maintains the control channels between NEs; a control channel provides the physical channel for LMP packets. Control channels are classified into in-fiber and out-of-fiber control channels. In-fiber control channels are discovered automatically and use the D4-D12 bytes of the DCC. Out-of-fiber control channels use Ethernet links and must be configured manually.
 TE Links
 A TE link is a traffic engineering link. An ASON NE advertises its bandwidth information to other ASON NEs through TE links to provide data for route computation. As resources, TE links can be regarded as fibers that carry bandwidth information and protection attributes. A TE link does not map to a fiber one-to-one, however, because each fiber may correspond to many TE links. Currently, a fiber can be configured with one TE link.
 The resources of a TE link are classified into three types: non-protection resources, working resources, and protection resources.
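The TE link concepts above can be sketched as a small data structure. This is an illustrative model only; the field names and the admission rule per service level are assumptions drawn from this material, not Huawei's actual data model.

```python
from dataclasses import dataclass

# Hypothetical model of a TE link and its three resource classes
# described above: non-protection, working, and protection resources.
@dataclass
class TELink:
    local_node: str
    remote_node: str
    non_protection: int = 0   # resources usable by diamond/silver/copper services
    working: int = 0          # MSP working resources (usable by gold services)
    protection: int = 0       # MSP protection resources (usable by iron services)

    def total_bandwidth(self) -> int:
        return self.non_protection + self.working + self.protection

    def can_carry(self, sla: str) -> bool:
        # Simplified admission rule per SLA class (see the service levels
        # later in this material).
        if sla in ("diamond", "silver", "copper"):
            return self.non_protection > 0
        if sla == "gold":
            return self.working > 0 or self.non_protection > 0
        if sla == "iron":
            return self.protection > 0 or self.non_protection > 0
        return False

link = TELink("NE1", "NE2", non_protection=4, working=2, protection=2)
print(link.total_bandwidth())   # 8
print(link.can_carry("gold"))   # True
```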
 The control channel creation and management functions negotiate and maintain the control channel between adjacent nodes by exchanging Config messages and running a fast keepalive mechanism (the Hello protocol).
 The Config message contains the following information: the local CCId, the sending NodeId, a message ID, and the control channel parameters to be negotiated (HelloInterval and HelloDeadInterval).
 Control channel management: After two adjacent nodes have agreed on the control channel parameters, the control channel enters the Up state, and Hello messages are sent at intervals of HelloInterval. The Hello protocol is run to monitor the control channel and detect its failure. If no Hello message is received within HelloDeadInterval, the control channel is considered invalid.
 The figure shows the discovery process of the control channel:
 Node A sends a Config message to node B.
 If node A receives a ConfigAck message from node B, the control channel is successfully established.
 If node A receives a ConfigNack message from node B, the negotiation fails and the control channel cannot be set up.
 If node A receives no response from node B, node A keeps resending the Config message to node B (dotted line in the figure), occupying the control channel check resources.
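The Config negotiation and Hello keepalive described above can be sketched as a minimal state machine. The parameter names HelloInterval and HelloDeadInterval follow the text; everything else (class and method names) is an illustrative assumption, not Huawei's implementation.

```python
import time

# Minimal sketch of LMP control-channel negotiation and keepalive.
class ControlChannel:
    def __init__(self, hello_interval=0.5, hello_dead_interval=1.5):
        self.hello_interval = hello_interval          # Hello send period
        self.hello_dead_interval = hello_dead_interval  # failure threshold
        self.state = "Down"
        self.last_hello = None

    def negotiate(self, peer_accepts: bool) -> str:
        # Config / ConfigAck (or ConfigNack) exchange with the peer.
        if peer_accepts:
            self.state = "Up"
            self.last_hello = time.monotonic()
        return "ConfigAck" if peer_accepts else "ConfigNack"

    def receive_hello(self):
        self.last_hello = time.monotonic()

    def check_alive(self) -> bool:
        # The channel is declared invalid if no Hello arrives
        # within HelloDeadInterval.
        if self.state != "Up":
            return False
        if time.monotonic() - self.last_hello > self.hello_dead_interval:
            self.state = "Down"
        return self.state == "Up"

cc = ControlChannel()
print(cc.negotiate(True))   # ConfigAck: the channel goes Up
cc.receive_hello()
print(cc.check_alive())     # True while Hellos keep arriving
```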
 A data link is a pair of interfaces that can be used to transmit user data.
 The BeginVerify message includes the number of data links to verify, the interval for sending Test messages, the supported encoding modes and transmission mechanisms, and the rate of Test messages.
 The Test message contains the Verify_Id and the local Interface_Id.
 The data link connectivity verification process is as follows:
 When verification starts, the active party sends a BeginVerify message to the passive party, carrying the number of data links to verify, the interval for sending Test messages, the supported encoding modes and transmission mechanisms, and the rate of Test messages.
 The connectivity check then sends a Test message on each data link (in the J0 byte or DCC) to confirm the physical connectivity of the link and to dynamically obtain the data link ID and TE link ID of the peer end.
 The TE link attribute verification process is as follows:
 Node A sends a LinkSummary message to node B. The message contains the link status information of the data links in the TE link on the local node, including the local link ID, peer link ID, encoding type, maximum and minimum bandwidth, used wavelengths, and per-data-link information.
 If node B accepts the port mapping and link attributes carried in the LinkSummary message, node B sends a LinkSummaryAck message to node A. Otherwise, node B must send a LinkSummaryNack message. If the parameters are negotiable, the LinkSummaryNack message also carries the parameters recommended to the peer, so that the peer can resend a LinkSummary message containing the new parameters. After this step, the adjacent nodes hold the same link attributes. This interaction is required for each TE link.
 In the preceding figure, the dotted line indicates that node A keeps resending LinkSummary messages to node B if no response is received from node B.
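The Ack/Nack decision above can be sketched as a simple consistency check: the receiving node compares the peer's advertised TE link attributes with its own and, for negotiable parameters, recommends its own values. The field names and the set of negotiable parameters are illustrative assumptions.

```python
# Sketch of the LinkSummary attribute-consistency check described above.
def check_link_summary(local: dict, peer: dict, negotiable=("min_bw",)):
    # Collect every attribute where the peer's advertised value differs.
    mismatches = {k: local[k] for k in local if peer.get(k) != local[k]}
    if not mismatches:
        return ("LinkSummaryAck", {})
    # For negotiable parameters, recommend our values so the peer can
    # resend a LinkSummary carrying them; others are hard failures.
    recommended = {k: v for k, v in mismatches.items() if k in negotiable}
    return ("LinkSummaryNack", recommended)

local = {"encoding": "SDH", "max_bw": 16, "min_bw": 1}
print(check_link_summary(local, dict(local)))             # Ack, no changes
print(check_link_summary(local, {**local, "min_bw": 4}))  # Nack + recommendation
```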
 The routing protocol of the Huawei OptiX GCP control plane is OSPF-TE, an extension of OSPF. Its main functions are as follows:
 Establishing neighbor relationships.
 Creating and maintaining control links.
 Flooding and collecting control link information of the control plane and generating control plane routing information, so as to provide routes for forwarding control plane message packets.
 Flooding and collecting TE link information of the transport plane and providing network service topology information for computing service paths.
 An ASON NE establishes a neighbor relationship with an adjacent NE through
the OSPF protocol and transmits routing information between adjacent NEs.
The process of neighbor discovery is also the process of establishing a
control link. ASON NEs also use OSPF to flood control links and TE links on
the entire network and create a routing table to support route calculation.
 When establishing the neighbor relationship, NEs exchange Hello messages and construct control links. The link-state advertisement (LSA) information is flooded to the entire network to generate the control plane topology information. The service plane generates TE LSAs based on the TE link information provided by LMP and floods the information to the entire network to generate the service plane topology information. CSPF is used to calculate routes. Each NE has the same link state database (LSDB), which stores the LSA information collected from all NEs on the network.
 Control Plane:
 The control interface sends Hello packets to discover neighbors and obtain the local control link status.
 Router-LSA packets are flooded so that all NEs obtain the topology information of the control plane on the entire network.
 After network convergence, route calculation is performed on the entire network. Each NE obtains the routes to the other nodes based on the algorithm, that is, builds the routing table. The NE then queries the routing table to determine the interface through which a control packet is sent.
 Transport Plane:
 Unlike on the control plane, Hello packets do not need to be sent. Instead, the local TE link status is obtained from the TE links established by LMP.
 TE-LSA packets are flooded to obtain the network-wide transport plane topology, which serves as the basis for subsequent service route calculation.
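The outcome of the flooding above is that every NE holds the same LSDB and can compute routes from it locally. The sketch below uses plain Dijkstra on an invented topology; real CSPF additionally prunes links that violate constraints (bandwidth, SRLG, explicit or excluded nodes), so this is an illustrative simplification.

```python
import heapq

# Identical on every NE after flooding: adjacency with link costs.
lsdb = {
    "NE1": {"NE2": 1, "NE5": 2},
    "NE2": {"NE1": 1, "NE3": 1, "NE5": 1},
    "NE3": {"NE2": 1},
    "NE5": {"NE1": 2, "NE2": 1},
}

def shortest_path(lsdb, src, dst):
    # Plain Dijkstra over the LSDB; builds the routing-table entry
    # for dst as seen from src.
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in lsdb[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    # Walk predecessors back from dst to recover the path.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

print(shortest_path(lsdb, "NE1", "NE3"))  # ['NE1', 'NE2', 'NE3']
```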
 A label switched path (LSP) is an end-to-end optical switching service path.
 The Resource Reservation Protocol (RSVP) is an IP-based resource reservation protocol. A user uses RSVP to request from the network the buffers and bandwidth that meet special QoS requirements. Intermediate nodes use RSVP to set up resource reservations and maintain the channel on the data transmission path so as to achieve the corresponding QoS.
 Process of establishing an LSP from node A to node C:
 The first node receives the service setup request and calculates the optimal service path based on the topology of the entire network.
 The first node drives the host software to complete service establishment.
 The first node sends a service establishment request to the intermediate nodes, and the other nodes complete service establishment through signaling protocol interaction:
 Node A requests to establish a reverse cross-connection according to the path information calculated for the LSP (or establishes the forward and reverse cross-connections at the same time), and then sends a Path message to node B.
 After receiving the Path message from upstream node A, node B sends an acknowledgment (Ack) message to node A, requests to set up a reverse cross-connection (or establishes the forward and reverse cross-connections at the same time), and then sends a Path message to node C.
 After receiving the Resv message from downstream node C, node B sends an Ack message to node C, requests to set up a forward cross-connection, and then sends a Resv message to node A.
 After receiving the Resv message from downstream node B, node A sends an Ack message to node B and requests to set up a forward cross-connection.
 Node A then requests to enable alarm monitoring and sends an alarm Path message to node B. The subsequent process is similar to that of the Path message.
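The two-phase exchange above can be simulated in a few lines: Path messages travel downstream while each node prepares its reverse cross-connection, then Resv messages travel upstream while each node commits the forward cross-connection. Message and function names are illustrative, not the actual RSVP-TE encoding; the Ack messages and alarm-monitoring phase are omitted for brevity.

```python
# Toy simulation of the Path/Resv exchange described above.
def establish_lsp(route):
    log = []
    cross_connects = {ne: set() for ne in route}
    # Path phase: head end -> tail end, setting up reverse cross-connections.
    for i, ne in enumerate(route):
        cross_connects[ne].add("reverse")
        if i + 1 < len(route):
            log.append(f"{ne} -> {route[i+1]}: Path")
    # Resv phase: tail end -> head end, setting up forward cross-connections.
    for i in range(len(route) - 1, -1, -1):
        cross_connects[route[i]].add("forward")
        if i > 0:
            log.append(f"{route[i]} -> {route[i-1]}: Resv")
    return log, cross_connects

log, xc = establish_lsp(["A", "B", "C"])
print(log)  # Path A->B, B->C, then Resv C->B, B->A
print(xc)   # every node ends up with both forward and reverse cross-connections
```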
 The process of deleting the LSP from node A to node C is as follows:
 Node A disables the alarm monitoring switch corresponding to the LSP and then sends a Path message to node B.
 After receiving the Path message from upstream node A, node B sends an Ack message to node A, disables the alarm monitoring switch corresponding to the LSP, and then sends a Path message to node C.
 After receiving the Path message from upstream node B, node C sends an Ack message to node B and disables alarm monitoring for the LSP.
 Node C deletes the cross-connection corresponding to the LSP and sends a PathErr message to upstream node B.
 After receiving the PathErr message from downstream node C, node B sends an Ack message to node C, deletes the cross-connection corresponding to the LSP, and then sends a PathErr message to node A.
 After receiving the PathErr message from downstream node B, node A sends an Ack message to node B and deletes the cross-connection corresponding to the LSP.
 SLA for ASON services: diamond service, gold service, silver service, copper service,
and iron service.
 In an ASON network, the OSPF protocol discovers ASON NEs automatically by sending protocol packets.
 After discovering its neighbor NEs, the OSPF protocol floods the information about the neighbor NEs to other NEs. In the end, every ASON NE in the domain has the information about all ASON NEs in the entire network.
 When an ASON NE is added to an ASON network, the other NEs automatically discover the new NE by using the OSPF protocol.
 When an ASON NE is removed from an ASON network (for example, the NE is powered off, the system control board is removed, or the physical channel is shut down), the other NEs automatically detect that the NE is missing.
 The ASON supports both traditional SDH services and end-to-end ASON services. To configure an ASON service, you only need to specify its source node, sink node, bandwidth requirement, and protection level. Service routing and the cross-connections at intermediate nodes are completed automatically by the network. You can also set explicit nodes, excluded nodes, explicit links, and excluded links to constrain the service routing.
 Compared with service configuration on SDH networks, ASON service configuration fully utilizes the routing and signaling functions of the ASON NEs and is therefore convenient.
 The ASON supports routing computation policies based on factors such as bandwidth, distance, hop count, and customized link cost. The user can select different routing computation policies for different services.
 A shared risk link group (SRLG) is used to improve the reliability and speed of service rerouting. In an ASON network, an SRLG can be configured when several optical fibers are contained in the same optical cable.
 The ASON provides mesh networking protection to enhance service survivability and network security.
 Mesh networking, one of the major networking modes of an ASON system, provides the following benefits:
 Flexibility and scalability.
 Compared with the traditional SDH networking mode, mesh networking does not need to reserve 50% of the bandwidth, so it saves bandwidth resources to satisfy increasingly large bandwidth demands.
 This networking mode also provides more than one recovery route for each service, so it makes the best use of network resources and enhances network security.
 Difference between protection and restoration:
 Generally, network protection involves capacity pre-allocated among NEs. For example, 1+1 protection is a simple protection scheme, whereas MSP is a complicated one. ASON protection is automatically detected and triggered by NEs and does not involve the management system. The protection switching time is short, no more than 50 ms. The backup resources, however, cannot be shared in the network.
 Generally, network restoration involves the use of any usable capacity among NEs; even extra capacity of low priority can be used for restoration. When a service trail fails, the network automatically searches for a new route and switches the services away from the faulty route. The restoration algorithm is the same as the trail selection algorithm. Restoration requires spare resources in the network for service rerouting, and rerouting involves route computation, so service restoration takes a relatively long time, typically seconds.
 Diamond services have the best protection capability. When there are enough resources in the network, diamond services provide permanent 1+1 protection. Diamond services are applicable to voice and data services and VIP private lines in industries such as banking, securities, and aviation.
 A diamond service is a service with 1+1 protection from the source node to the
sink node. It is also called a 1+1 service. For a diamond service, there are two
different LSPs available between the source node and the sink node. The two LSPs
should be as separate as possible. One is the working LSP and the other is the
protection LSP. The same service is transmitted to the working LSP and the
protection LSP at the same time. If the working LSP is normal, the sink node
receives the service from the working LSP; otherwise, the sink node receives the
service from the protection LSP. A diamond service supports sharing of the
working and protection LSPs.
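The dual-feed, selective-receive behavior of a diamond service described above can be sketched as a tiny selector at the sink node. This is an illustrative simplification; the function name and return values are assumptions.

```python
# Sketch of diamond (1+1) receive selection: the service is transmitted
# on both LSPs simultaneously; the sink receives from the working LSP
# unless it has failed.
def select_receive(working_ok: bool, protection_ok: bool):
    if working_ok:
        return "working"
    if protection_ok:
        return "protection"
    return None  # both LSPs failed: rerouting would be triggered

print(select_receive(True, True))    # working
print(select_receive(False, True))   # protection
print(select_receive(False, False))  # None
```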
 Requirements for creation:
 Sufficient non-protection resources are available between the source node
and the sink node.
 Protection and restoration:
 If the resources are sufficient, two LSPs are always available for a permanent
1+1 diamond service. One is the working LSP and the other is the protection
LSP.
 If the resources are not sufficient, one LSP can still be reserved for a
permanent 1+1 diamond service to ensure the service survivability.
 Whenever a fiber cut occurs and no free resources are available, the shared
LSP can be used for a permanent 1+1 diamond service to reroute the service
successfully.
 Protection and restoration:
 When the protection LSP fails, services are not switched and rerouting is not triggered.
 When the working LSP fails, services are switched to the protection LSP for transmission. Rerouting is not triggered.
 When both the working and protection LSPs fail, rerouting is triggered to create a new LSP to restore services.
 When both the working and protection LSPs fail and no resources are available, the shared LSP can be used for a rerouting 1+1 diamond service to reroute the service successfully.
 Protection and restoration:
 When the working LSP fails, services are switched to the protection LSP for transmission. Rerouting is not triggered.
 When the protection LSP fails, services are not switched and rerouting is not triggered.
 When both the working and protection LSPs fail, rerouting is not triggered.
 Gold services are applicable to voice and important data services. Compared with diamond services, gold services provide higher bandwidth utilization.
 A gold service needs only one LSP. This LSP must use working resources or non-protection resources of TE links. When a fiber on the path of a gold service is cut, the ASON first triggers MSP switching to protect the service. If multiplex section protection fails, the ASON triggers rerouting to restore the service.
 Requirements for creation
 Sufficient working resources or non-protection resources are available
between the source node and the sink node.
 Multiplex section protection
 Supports using the working resources of a 1:1 linear multiplex section
protection chain to create gold services.
 Supports using the working resources of a 1+1 linear multiplex section
protection chain to create gold services.
 Supports using the working resources of a 1:N linear multiplex section
protection chain to create gold services.
 Supports using the working resources of a two-fiber bidirectional multiplex
section protection ring to create gold services.
 Supports using the working resources of a four-fiber bidirectional multiplex
section protection ring to create gold services.
 For silver services, the restoration time is hundreds of milliseconds to several seconds. The silver level is suitable for data or Internet services with low real-time requirements.
 Silver services are also called rerouting services. When an LSP fails, the ASON triggers rerouting to restore the service. If there are not enough resources, services may be interrupted.
 Requirements for creation:
 Sufficient non-protection resources are available between the source node
and the sink node.
 Service restoration:
 When the original LSP fails, rerouting is triggered to create a new LSP to
restore services.
 Rerouting:
 Supports rerouting lockout.
 Supports rerouting priority.
 Supports four rerouting policies:
 Use existing trails whenever possible.
 Do not use existing trails whenever possible.
 No rerouting constraint.
 Use simulated section restoration.
 Copper services are seldom used. Generally, temporary services, such as burst services during holidays, are configured as copper services.
 Copper services are also called non-protection services. If an LSP fails, services are not rerouted and are interrupted.
 Requirements for creation:
 Sufficient non-protection resources are available between the source node and the sink node.
 Service restoration:
 Rerouting is not supported.
 Iron services are also seldom used. Generally, temporary services are configured as iron services. For example, when service volume soars during holidays, the services can be configured as iron services to fully use the bandwidth resources.
 An iron service is also called a preemptable service. Iron services use non-protection resources or protection resources of TE links to create LSPs. When an LSP fails, services are interrupted and rerouting is not triggered.
 When an iron service uses the protection resources of a TE link and MS switching occurs, the iron service is preempted and interrupted. After the multiplex section recovers, the iron service is restored. The interruption, preemption, and restoration of iron services are all reported to the NMS.
 When an iron service uses non-protection resources and network resources are insufficient, the iron service may be preempted by a rerouted silver or diamond service, and is therefore interrupted.
 Requirements for creation:
 Sufficient protection resources or non-protection resources are available
between the source node and the sink node.
 To create iron services, the following resources can be used:
 Protection resources of 1:1 linear MSP.
 Protection resources of 1:N linear MSP.
 Protection resources of two-fiber bidirectional MSP.
 Protection resources of four-fiber bidirectional MSP.
 The service level agreement (SLA) is used to classify services according to the
service protection.
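The five SLA classes described above can be summarized as a lookup table. This is a behavioral summary simplified from this material (the attribute names are assumptions, not NMS data fields).

```python
# Behavioral summary of the five ASON SLA classes described above.
SLA = {
    "diamond": {"protection": "1+1 LSP",       "reroute": True},
    "gold":    {"protection": "MSP switching", "reroute": True},
    "silver":  {"protection": None,            "reroute": True},
    "copper":  {"protection": None,            "reroute": False},
    "iron":    {"protection": None,            "reroute": False,
                "preemptable": True},
}

def survives_single_failure(level: str) -> bool:
    # A service rides out a single LSP failure if it has dedicated
    # protection or can be rerouted.
    attrs = SLA[level]
    return attrs["protection"] is not None or attrs["reroute"]

print([lvl for lvl in SLA if survives_single_failure(lvl)])
# ['diamond', 'gold', 'silver']
```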
 Tunnels are mainly used to carry VC-12 or VC-3 services. Tunnels are also called ASON server trails.
 When lower order services are to be created, first create a VC-4 tunnel. The protection level for the tunnel can be diamond, gold, silver, or copper. Then, use the management system to complete the configuration of the lower order service.
 The configuration of a tunnel is different from that of the above-mentioned service types: its cross-connection from the tributary board to the line board can only be configured manually. A tunnel between NEs can be a diamond, gold, silver, or copper ASON server trail. During service creation, the ASON automatically chooses the line boards and the timeslots on the line boards.
 After tunnels are created, you can create VC-12 or VC-3 lower order services. During rerouting or optimization of a tunnel, however, the cross-connections at the source and sink nodes automatically switch to the new ports.
 Tunnel level: VC-4.
 The following alarms trigger LSP rerouting:
 Port-level alarms: R_LOS, R_LOF, B2_EXC, B2_SD, MS_AIS, MS_RDI.
 Channel-level alarms: AU_AIS, AU_LOP, B3_EXC (optional), and B3_SD (optional).
 After the topology changes several times, the ASON may have less satisfactory routes and therefore requires service optimization. Service optimization involves creating a new LSP, switching the optimized service to the new LSP, and deleting the original LSP, thereby adjusting and optimizing the service without interrupting it. The service route can be restricted during optimization.
 LSP optimization has the following features:
 Only manual optimization is supported.
 Optimization does not change the protection level of the optimized service.
 During optimization, rerouting, downgrade/upgrade, and deletion operations are not allowed.
 During creation, rerouting, downgrade/upgrade, starting, or deletion operations, optimization is not allowed.
 The following service types support optimization: diamond, gold, silver, copper, iron, and tunnel services.
 Service association can be used to associate the same service accessed into the ASON network from different points.
 An association service involves associating two ASON services that have different routes. During the rerouting or optimization of either service, the rerouted service avoids the route of the associated service. Service association is mainly used for services accessed from two points (dual-source services).
 When resources are insufficient, the associated trail sharing function is automatically used for the associated services, improving service survivability. Pay attention to the following points:
 When the associated trail sharing function is used for associated services, the association cannot be canceled, nor can the associated services be migrated to non-ASON services. Therefore, you need to optimize the routes of the services before canceling the association or migrating the associated services to non-ASON services.
 Tunnel services do not support the associated trail sharing function.
 Service creation:
 Supports the creation of associated services on the same ingress node or different ingress nodes.
 Service optimization:
 Supports optimization of associated services.
 Four rerouting policies are supported:
 Use existing trails whenever possible:
 During rerouting, the route of the new LSP overlaps the original route and uses resources on the original route whenever possible. This policy is suitable for improving network resource utilization on a network with insufficient link resources.
 Do not use existing trails whenever possible:
 During rerouting, the route of the new LSP is separated from the original route whenever possible. This policy is suitable for keeping the restoration path away from the original faulty route on a network with sufficient link resources.
 No rerouting constraint:
 During rerouting, the original LSP is not considered and it does not matter whether the new path overlaps the original path; the current optimal path is selected based on the traffic engineering policy.
 Use simulated section restoration:
 During rerouting, the original route is used whenever possible, and a new path is computed only in the span where the services are interrupted. If restoration path computation fails, the policy "Use existing trails whenever possible" is used instead for rerouting computation.
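One common way to realize the first three policies above is to bias link costs before running the path computation: favor the original links, penalize them, or leave costs untouched. The function below is an illustrative sketch under that assumption; the policy names, cost factors, and data shapes are not from the product.

```python
# Sketch: implement rerouting policies by adjusting link costs before
# the (separate) shortest-path computation.
def adjust_costs(links, original_links, policy):
    adjusted = {}
    for link, cost in links.items():
        if policy == "use_existing" and link in original_links:
            cost = cost * 0.1    # strongly prefer resources on the original route
        elif policy == "avoid_existing" and link in original_links:
            cost = cost * 100    # steer the new LSP away from the faulty route
        # "no_constraint": costs are left untouched
        adjusted[link] = cost
    return adjusted

links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 3}
orig = {("A", "B"), ("B", "C")}
print(adjust_costs(links, orig, "avoid_existing")[("A", "B")])  # 100
print(adjust_costs(links, orig, "no_constraint")[("A", "B")])   # 1
```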
 Do not use the original trail resources: when ASON services are rerouted, the original trail resources are not used to create new LSPs, which improves service security.
 To optimize network planning, the ASON provides the restoration trail presetting function so that, when the current service trail fails, rerouting can be performed according to user requirements. In this manner, the controllability of service rerouting is improved.
 The ASON supports presetting of the restoration trail for diamond, gold, and silver ASON services. When an ASON service is rerouted, it is switched to the preset trail if that trail is available.
 The ASON periodically checks whether the restoration trail is available; the check interval is 60 minutes by default. If the restoration trail is not available, the ASON automatically computes a new restoration trail to replace the current one.
 After the replacement, the ASON reports the performance event related to the change of the preset restoration trail.
 When the original restoration trail is not available and there is no substitute for it, the ASON reports the performance event related to the unavailability of the preset restoration trail.
 Preset restoration trails do not occupy actual resources.
 Before service creation and optimization, you can run the pre-computation
command to query the service path selected by the control plane based on the
current routing policy and constraints. In this way, you can select the most
appropriate path to create or optimize services.
 After many changes in an ASON network, service routes may differ from the
original routes. ASON allows a service to return to its original path after faults on
the original path are rectified.
 Generally, the route during ASON service creation is the original route of the
ASON service. If the original route recovers after rerouting of the ASON service,
the service can be adjusted to the original route automatically or manually.
 ASON trails are classified into non-revertive ASON trails, automatically revertive
ASON trails and scheduled revertive ASON trails.
 For non-revertive ASON trails, the resources on the original path are not
retained after an ASON service is routed to a new trail. After the fault on the
original trail is rectified, the service is not reverted to the original trail. You
can specify whether to revert the service to the original timeslot or port.
 For automatically revertive ASON trails, if the fault on the original trail is
rectified after an ASON service is routed to a new trail, the service is
automatically reverted to the original trail after the specified WTR time. You
can manually revert the service to the original trail.
 For scheduled revertive ASON trails, after a service is rerouted and the fault on the original route is cleared, you can set a scheduled reversion time so that the service reverts at the specified time.
 Services configured on revertive ASON trails (including automatically
revertive ASON trails and scheduled revertive ASON trails) can be forcibly
reverted to the original trail no matter whether the original trail has
recovered.
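The three reversion modes above can be sketched as a single decision function: non-revertive trails never revert, automatically revertive trails wait out a WTR (wait-to-restore) time after the original trail recovers, and scheduled revertive trails revert at a configured time. The function and parameter names are illustrative assumptions; the 300 s default WTR is invented for the example.

```python
# Sketch of the reversion decision for the three revertive modes above.
def should_revert(mode, original_ok, recovered_for=0.0, wtr=300.0,
                  now=None, scheduled_at=None):
    if not original_ok:
        return False  # the original trail must have recovered first
    if mode == "non-revertive":
        return False
    if mode == "auto":
        # Revert only after the original trail has stayed healthy
        # for the full WTR time.
        return recovered_for >= wtr
    if mode == "scheduled":
        return (now is not None and scheduled_at is not None
                and now >= scheduled_at)
    return False

print(should_revert("auto", True, recovered_for=120))            # False (< WTR)
print(should_revert("auto", True, recovered_for=400))            # True
print(should_revert("non-revertive", True, recovered_for=999))   # False
```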
 When a service configured with a shared mesh restoration trail is rerouted, the service preferentially uses the resources on this trail. If all resources on the shared mesh restoration trail are usable, these resources are used for service restoration. If only some resources on the shared mesh restoration trail are usable, these resources are preferentially used when computing a restoration trail. The other resources may be faulty or used by other services that share the trail.
 The shared mesh restoration trail has the following features:
 Only revertive silver services can be configured with a shared mesh restoration trail.
 A shared mesh restoration trail cannot be set for concatenation services at different levels.
 For a silver service configured with a shared mesh restoration trail, the revertive attribute cannot be changed.
 The resources on a shared mesh restoration trail can only be the unprotected resources of TE links.
 For a silver service configured with a shared mesh restoration trail, do not set a preset restoration trail.
 The ASON software supports conversion between ASON services, and between ASON services and traditional services. The conversion is performed in service and does not interrupt the services.
 Service migration between ASON trails and permanent connections. Currently, the ASON software supports:
 Migration between diamond services and permanent SNCP connections.
 Migration between gold services and permanent connections.
 Migration between silver services and permanent connections.
 Migration between copper services and permanent connections.
 Migration between iron services and permanent connections.
 Migration between tunnel services and server trails.
 Currently, Huawei's ASON software also supports:
 Migration among diamond, gold, silver, and copper services.
 Migration among diamond, gold, silver, and copper tunnels.
 Merging an ASON network with a traditional SDH network:
 An ASON network can be used with an SDH network to form a hybrid network. In this case, end-to-end services can be managed and created in a centralized manner.
 You can also create a PC (permanent connection) service type on an ASON network. After such a service is created, the resources reserved by the PC service are no longer allocated by the ASON software.
 1. Reference answer: diamond, gold, silver, copper, iron.
 2. Reference answer: an ASON network supports multiple service types with different levels and protection types, whereas a traditional network provides a single service type and protection scheme.
 If multiple 2 Mbit/s services share distributed VC-4 trails, the services cannot be
protected against node failures.
 2M ASON services emerged to address the previously mentioned disadvantages of traditional ASON services. With 2M ASON services deployed, multiple 2 Mbit/s services can share a VC-4 and be protected against node failures, implementing 2 Mbit/s service-based protection as well as increasing bandwidth usage.
 2M ASON services support ASON rerouting. After an intermediate node fails, 2M ASON services can automatically select a proper VC-12 virtual TE link, and are therefore protected against the node failure.
 Normal:
 Service 1: NE1–>NE2–>NE3
 Service 2: NE1–>NE2–>NE6
 Service 3: NE4–>NE2–>NE3
 Fault 1:
 Service 1: NE1–>NE5–>NE2–>NE3
 Service 2: NE1–>NE5–>NE2–>NE6
 Service 3: NE4–>NE2–>NE3
 Fault 2:
 Service 1: NE1–>NE5–>NE3
 Service 2: NE1–>NE5–>NE6
 Service 3: NE4–>NE5–>NE3
 Fault 3:
 Service 1: NE1–>NE2–>NE5–>NE3
 Service 2: NE1–>NE2–>NE6
 Service 3: NE4–>NE2–>NE6–>NE3
 ASON networks can provide reliable transmission planes for differential protection
services because the networks meet the requirements on transmission paths and
delays and protect the services against multiple fiber breaks.
 The forward and backward paths are consistent.
 Differential protection devices determine the current sampling interval based
on the transmission delay. If the forward and backward paths are inconsistent,
the calculated unidirectional channel delay is incorrect, resulting in sampled
data errors. When the calculated current difference exceeds the threshold,
the differential protection devices automatically shut down the transmission
line for protection, causing blackouts.
 The service interruption time in the case of trail changes is longer than 100 ms.
 When detecting an exception on the working trail, differential protection
devices switch services to the protection trail. If the service trail change time
is less than 100 ms, the differential protection devices cannot detect the
change and still sample data at the interval before the change. As a result,
the sampled data is incorrect. When the current difference exceeds the
threshold, the differential protection devices automatically shut down the
transmission line for protection, causing blackouts.
 The unidirectional transmission delay is shorter than 10 ms.
 In order to meet the buffer limit requirements for differential protection
devices.
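The three requirements above can be gathered into a single check. The threshold values (100 ms, 10 ms) come from the text; the helper itself, its name, and the path representation are illustrative assumptions, not a real NMS function.

```python
# Sketch of the transmission-plane requirements for differential
# protection services listed above.
def check_diff_protection(forward_path, backward_path,
                          trail_change_ms, one_way_delay_ms):
    return {
        # Forward and backward paths must traverse the same nodes.
        "paths_consistent": forward_path == list(reversed(backward_path)),
        # A trail change must interrupt service for > 100 ms so the
        # protection devices can detect it and resynchronize sampling.
        "change_detectable": trail_change_ms > 100,
        # One-way delay must stay < 10 ms to fit the device buffers.
        "delay_ok": one_way_delay_ms < 10,
    }

result = check_diff_protection(
    ["NE1", "NE2", "NE4"], ["NE4", "NE2", "NE1"],
    trail_change_ms=150, one_way_delay_ms=4)
print(all(result.values()))  # True
```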
 Normal: NE1<->NE4.
 Fault 1: NE1<->NE2<->NE5<->NE4.
 Fault 2: NE1<->NE2<->NE3<->NE6<->NE5<->NE4.
 Fault 3: NE1<->NE2<->NE3<->NE6<->NE8<->NE7<->NE4.
 Answer:
 The forward and backward paths are consistent.
 The service interruption time in the case of trail changes is longer than 100 ms.
 The unidirectional transmission delay is shorter than 10 ms.