A fundamental aspect in the regulation of the continuity of electricity supply is the identification of faults caused by exceptional events, which are therefore outside the utility's control and responsibility. Different methods have been proposed over the years: interpreting the observed faults as the signal of an underlying system naturally leads to analyzing the problem by means of a hidden Markov model. These models, in fact, are widely used for introducing dependence in data and/or for modeling observed phenomena that depend on hidden processes. The application of this method shows that the model is able to identify exceptional events; moreover, the study of the estimated model parameters gives rise to reality-linked considerations.
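As a minimal sketch of the modeling idea (not the paper's actual model or data), the forward algorithm below scores a sequence of bucketed fault counts under a hypothetical two-state hidden Markov model whose hidden regimes are "normal" and "exceptional event"; all probabilities are illustrative assumptions.

```python
def hmm_forward(obs, pi, A, B):
    """P(observation sequence) under a discrete HMM, via the forward algorithm.

    pi[i]   : initial probability of hidden state i
    A[i][j] : transition probability from state i to state j
    B[i][k] : probability of emitting observation symbol k in state i
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# States: 0 = normal regime, 1 = exceptional event.
# Symbols: 0 = low daily fault count, 1 = high daily fault count.
pi = [0.95, 0.05]                 # exceptional events are initially rare
A = [[0.9, 0.1], [0.3, 0.7]]      # events tend to persist once started
B = [[0.8, 0.2], [0.1, 0.9]]      # high fault counts are likely during events
likelihood = hmm_forward([0, 0, 1, 1, 0], pi, A, B)
```

Running the same recursion while recording the most likely state at each step (the Viterbi variant) is what would flag which days belong to an exceptional event.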
The feasibility of future multiterminal dc (MTDC) systems depends largely on the capability to withstand dc-side faults. Simulation models of MTDC systems play a very important role in investigating these faults. For such studies, the test system needs to be accurate and computationally efficient. This paper proposes a detailed equivalent model of the modular multilevel converter (MMC), which is used to develop the MTDC test system. The proposed model is capable of representing the blocked-mode operation of the MMC and can be used to study the balancing control of the capacitor voltages. In addition, the operation of the MMC when redundant submodules are included in the arms can also be studied. A simplified model of a hybrid high-voltage dc (HVDC) breaker is also developed. Hence, the developed test system is capable of accurately describing the behavior of an MMC-based MTDC system employing hybrid HVDC breakers during fault conditions. Using time-domain simulations, permanent dc-side faults are studied in the MTDC system. In addition, a scheme to control the fault current through the MMC using thyristors on the ac side of the converter is proposed.
Real-time hardware-in-the-loop performance assessment of three different passive islanding detection methods for local and wide-area synchrophasor measurements is carried out in this paper. Islanding detection algorithms are deployed within the phasor measurement unit (PMU) using logic equations. Tripping decisions are based on local and wide-area synchrophasors as computed by the PMU, and trips are generated using IEC 61850-8-1 generic object-oriented substation event (GOOSE) messages. The performance assessment compares these islanding detection schemes for the nondetection zone and operation speed under different operating conditions. The testbench that is demonstrated is useful for a myriad of applications where simulation exercises in power system computer-aided design software provide no realistic insight into the practical design and implementation challenges. Finally, different communication latencies introduced due to the utilization of synchrophasors and IEC 61850-8-1 GOOSE messages are determined.
In distribution system planning and operation, accurate assessment of reliability performance is essential for making informed decisions. Also, performance-based regulation, accompanied by quality regulation, increases the need to understand and quantify differences in reliability performance between networks. Distribution system reliability performance indices exhibit stochastic behavior due to the impact of severe weather. In this paper, a new reliability model is presented which incorporates the stochastic nature of the severe weather intensity and duration to model variations in failure rate and restoration time. The model considers the impact of high winds and lightning and can be expanded to account for more types of severe weather. Furthermore, the modeling approach considers when severe weather is likely to occur during the year by using a nonhomogeneous Poisson process (NHPP). The proposed model is validated and applied to a test system to estimate reliability indices. Results show that the stochasticity in weather has a great impact on the variance in the reliability indices.
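A nonhomogeneous Poisson process with a seasonal intensity can be simulated by Lewis thinning: candidate events are drawn from a homogeneous envelope and accepted with probability proportional to the time-varying intensity. The sketch below is a generic illustration with a made-up storm intensity, not the paper's fitted model.

```python
import math
import random

def simulate_nhpp(intensity, lam_max, horizon, rng):
    """Event times of a nonhomogeneous Poisson process via Lewis thinning.

    intensity(t) must satisfy intensity(t) <= lam_max on [0, horizon].
    """
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)          # candidate from homogeneous envelope
        if t > horizon:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                   # accept with prob intensity/lam_max

# Hypothetical seasonal severe-weather intensity (events/day), peaking mid-year.
def storm_intensity(t_days):
    return 0.05 + 0.04 * math.sin(2.0 * math.pi * t_days / 365.0) ** 2

rng = random.Random(1)
storms = simulate_nhpp(storm_intensity, 0.09, 365.0, rng)
```

Feeding such event times into per-event failure-rate and restoration-time models is the general pattern for propagating weather stochasticity into reliability indices.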
This paper presents the effects of load impedance, line length, and branches on the performance of a medium-voltage power-line communication (PLC) network. The power-line network topology adopted here is similar to that of the system in Tanzania. Different investigations with regard to network load impedances, direct line length (from transmitter to receiver), branched line length, and number of branches have been carried out. From the frequency response of the transfer function (ratio of the received and transmitted signals), it is seen that the position of notches and peaks in the magnitude and phase responses is largely affected, in terms of attenuation and dispersion, by the aforementioned network parameters and configuration. These effects are observed in the time-domain responses too. The observations presented in the paper could be helpful in the suitable design of PLC systems for better data transfer and system performance.
Power-line networks are a promising medium for offering broadband services such as Internet access, voice over Internet protocol, and digital entertainment. In this paper, the delay spread, coherence bandwidth, channel capacity, and average delay of typical indoor power-line networks are studied in frequency bands up to 100 MHz. Earlier studies of indoor power-line networks considered frequencies up to 30 MHz only, and have shown that at these frequencies the data rates are generally low and inefficient for digital entertainment in comparison with wireless local-area network standards such as IEEE 802.11n. In this paper, it is shown that at 100 MHz, the average channel capacity for typical indoor power-line networks can be up to 2 Gb/s, and it is found that by increasing the number of branches in the link between the transmitting and receiving ends, the average channel capacity decreases from 2 Gb/s to 1 Gb/s (when the number of branches was increased by four times for a power spectral density of 60 dBm/Hz). At the same time, the coherence bandwidth decreased from 209.45 kHz to 137.41 kHz, which is still much better than the coherence bandwidths corresponding to 30-MHz systems. It is therefore recommended to operate indoor power-line networks at 100-MHz bandwidths for a wide variety of broadband services.
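For context, the RMS delay spread of a power delay profile and the rule-of-thumb coherence bandwidth (roughly 1/(50·σ_τ) for a 0.9 frequency-correlation level) can be computed as below; the four-path profile is a hypothetical example, not a measured power-line channel.

```python
import math

def rms_delay_spread(delays_s, powers):
    """RMS delay spread (seconds) of a discrete power delay profile."""
    p_tot = sum(powers)
    mean = sum(d * p for d, p in zip(delays_s, powers)) / p_tot
    second = sum(d * d * p for d, p in zip(delays_s, powers)) / p_tot
    return math.sqrt(second - mean * mean)

def coherence_bandwidth(sigma_tau):
    """Rule-of-thumb bandwidth over which the channel stays ~0.9-correlated."""
    return 1.0 / (50.0 * sigma_tau)

# Hypothetical four-path profile: delays in seconds, linear path powers.
delays = [0.0, 0.3e-6, 0.7e-6, 1.5e-6]
powers = [1.0, 0.5, 0.25, 0.1]
sigma = rms_delay_spread(delays, powers)
bc = coherence_bandwidth(sigma)
```

A larger delay spread (more and longer echoes, i.e. more branches) directly shrinks the coherence bandwidth, which is the trend the abstract reports.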
Power-line networks are an excellent infrastructure for broadband data transmission. However, various multipaths exist within a broadband power-line communication (BPLC) system due to stochastic changes in the network load impedances, branches, etc., which affect network performance. This paper investigates the performance of indoor channels of a BPLC system that uses orthogonal frequency-division multiplexing (OFDM) techniques. It is observed that when a branch is added in the link between the sending and receiving ends of an indoor channel, an average power loss of 4 dB is found. Additionally, when the terminal impedances of the branch change from the line characteristic impedance to lower values, the power loss (in signal-to-noise ratio) is about 0.67 dB/Ω. On the contrary, for every increase in the terminal impedances by 100 Ω above the line characteristic impedance, the power loss is 0.1 dB/Ω. When the line terminal impedances are close to short or open circuits, OFDM techniques show degraded performance. This situation is also observed when the number of branches increases. In this paper, it is shown that concatenated Reed-Solomon codes with interleaved Viterbi decoding can be used to overcome such performance degradation. The observations presented in the paper could be useful for the efficient design of a BPLC system that uses OFDM techniques.
Power-line networks have been proposed for broadband data transmission. The presence of multipaths within the broadband power-line communication (BPLC) system, due to stochastic changes in the network load impedances, branches, etc., poses a real challenge, as it affects network performance. This paper investigates the performance of an orthogonal frequency-division multiplexing (OFDM)-based BPLC system that uses underground cables. It is found that when a branch is added in the link between the sending and receiving ends, there is an average power loss of 4 dB. In addition, when the terminal impedances of the branches connected to the link between the transmitting and receiving ends vary from the line characteristic impedance to low-impedance values, the power loss (in signal-to-noise ratio) is about 0.35 dB/Ω. On the contrary, for an increase in the terminal impedances by 100 Ω above the line characteristic impedance, the power loss is 0.23 dB/Ω. When the branch terminal impedances are close to short or open circuits, OFDM techniques show degraded performance. This situation is also observed when the number of branches increases. It is shown that concatenated Reed-Solomon codes with interleaved Viterbi decoding can be used to overcome the degraded network performance, enabling an efficient design of BPLC systems that use OFDM techniques.
This paper presents a generalized transmission-line approach to determine the transfer function of a power-line network of a two-conductor system (two parallel conductors) with distributed branches. The channel frequency responses are derived considering different terminal loads and branches. The model's time-domain behavior is validated using commercial power system simulation software called Alternative Transients Program-Electromagnetic Transients Program (ATP-EMTP). The simulation results from the model for three different topologies considered have excellent agreement with corresponding ATP-EMTP results. Hence, the model can be considered as a tool to characterize any given power-line channel topology that involves the two-conductor system. In the companion paper (Part II), the proposed method is extended for a multiconductor power-line system.
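One common way to compute the transfer function of a branched two-conductor channel is to cascade ABCD (chain) matrices, representing each branch as a shunt input impedance. The sketch below is a generic illustration under assumed parameters (a low-loss 75-Ω line, 50-Ω loads, one mid-way 10-m branch), not the paper's exact formulation.

```python
import cmath

def line_abcd(gamma, z0, length):
    """ABCD matrix (a, b, c, d) of a uniform transmission-line section."""
    g = gamma * length
    return (cmath.cosh(g), z0 * cmath.sinh(g),
            cmath.sinh(g) / z0, cmath.cosh(g))

def shunt_abcd(z_in):
    """ABCD matrix of a shunt element (a branch seen as input impedance z_in)."""
    return (1.0, 0.0, 1.0 / z_in, 1.0)

def mul(m1, m2):
    """Cascade two ABCD two-ports (matrix product, source-to-load order)."""
    a1, b1, c1, d1 = m1
    a2, b2, c2, d2 = m2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

def branch_input_impedance(gamma, z0, length, z_load):
    """Input impedance of a branch line terminated in z_load."""
    t = cmath.tanh(gamma * length)
    return z0 * (z_load + z0 * t) / (z0 + z_load * t)

def transfer_function(abcd, z_load):
    """H = V_receiver / V_transmitter for an ABCD cascade loaded by z_load."""
    a, b, _, _ = abcd
    return z_load / (a * z_load + b)

# Assumed parameters: f = 10 MHz, v = 2e8 m/s, small conductor loss.
f = 10e6
gamma = complex(1e-4, 2.0 * cmath.pi * f / 2e8)   # alpha + j*beta per meter
z_b = branch_input_impedance(gamma, 75.0, 10.0, 50.0)
net = mul(mul(line_abcd(gamma, 75.0, 20.0), shunt_abcd(z_b)),
          line_abcd(gamma, 75.0, 20.0))
H = transfer_function(net, 50.0)
```

Sweeping `f` and repeating this cascade per frequency yields the magnitude/phase response with the notches that branch reflections produce.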
In this paper, we present an approach to determine the transfer function for multiconductor power-line networks with distributed branches and load terminations for broadband power-line communication (BPLC) applications. The applicability of the proposed channel model is verified numerically in the time domain using the finite-difference time-domain (FDTD) method for the solution of transmission lines. The channel model simulation results are in excellent agreement with the corresponding FDTD results. The model could therefore be useful in the analysis and design of BPLC systems involving multiconductor power-line topologies.
Underground cable power transmission systems are widely used in urban low-voltage power distribution. In order to assess the performance of such distribution systems as a low-voltage broadband power-line communication (BPLC) channel, this paper investigates the effects of load impedance, line length, and branches on such systems, with special emphasis on power-line networks found in Tanzania. From the frequency response of the transfer function (ratio of the received and transmitted signals), it is seen that the position of notches and peaks in the magnitude is largely affected (observed in time-domain responses too) by the aforementioned network configuration and parameters. Additionally, the channel capacity of such PLC channels is investigated for various conditions. The observations presented in this paper could be helpful in the suitable design of PLC systems for better data transfer and system performance.
Recently, different models based on transmission-line (TL) theory have been proposed for analyzing broadband power-line communication (BPLC) systems. In this paper, we make an attempt to validate one such BPLC model with laboratory experiments by comparing the channel transfer functions. Good agreement between the TL-theory-based BPLC model and the experiments is found for channel frequencies up to about 100 MHz. This work with controlled experiments for appropriate validation could motivate the application and extension of TL-theory-based BPLC models for the analysis of indoor, low-voltage, or medium-voltage channels.
The power line has been proposed as a solution to deliver broadband services to end users. Various studies in the recent past have reported a decrease in channel capacity with an increase in the number of branches for a given channel type, whether indoor, low-voltage (LV), or medium-voltage (MV). Those studies, however, did not provide a clear insight into how the channel capacity is related to the number of distributed branches along the line. This paper attempts to quantify and characterize the effect on channel capacity of the number of branches and of different terminal loads for a given type of channel. It is shown that for a power spectral density (PSD) between -90 dBm/Hz and -30 dBm/Hz, the channel capacity decreases by 20-30 Mb/s per branch, 14-24 Mb/s per branch, and 20-25 Mb/s per branch for an MV channel, LV channel, and indoor channel, respectively. It is also shown that the channel capacity is at a minimum when the loads are terminated in the characteristic impedance for any type of channel treated here. It is further shown that there could be a significant loss in channel capacity if a ground return were used instead of a conventional adjacent-conductor return. The analysis presented in this paper would help in designing appropriate power-line communication equipment for better and more efficient data transfer.
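The order of magnitude of such per-branch capacity losses can be reproduced with a simple flat-channel Shannon estimate in which each added branch costs a fixed attenuation. The 4-dB-per-branch loss, the bandwidth, the noise floor, and the PSD below are all illustrative assumptions, not the paper's fitted values.

```python
import math

def capacity_mbps(n_branches, psd_dbm_hz, noise_dbm_hz=-110.0, bw_hz=20e6,
                  loss_db_per_branch=4.0):
    """Shannon capacity (Mb/s) of a flat channel whose gain drops per branch.

    All figures are illustrative: flat transmit PSD, flat noise floor,
    and a fixed attenuation added by every branch.
    """
    gain_db = -loss_db_per_branch * n_branches
    snr_db = psd_dbm_hz + gain_db - noise_dbm_hz
    return bw_hz * math.log2(1.0 + 10.0 ** (snr_db / 10.0)) / 1e6

# Capacity lost when going from two to three branches on the link.
drop = capacity_mbps(2, -60.0) - capacity_mbps(3, -60.0)
```

In the high-SNR regime the loss per branch is roughly bandwidth × (4 dB)/(10 log10 2) bits/s, independent of the absolute SNR, which is why the reported per-branch decrements cluster in a narrow range.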
Estimation of electromagnetic (EM) field emissions from broadband power-line communication (BPLC) systems is necessary because, at their operating frequencies, the radiated emissions from BPLC systems act as sources of interference/crosstalk to other radio-communication systems. The transmission-line (TL) systems used for BPLC are complex, involving arbitrarily/irregularly distributed branched networks, arbitrary termination loads, varying line lengths, and varying line characteristic impedances. In order to study the electromagnetic-compatibility (EMC) issues associated with the radiated emissions of such complex BPLC networks, knowledge of the current and voltage distributions along the length of the power-line channels is needed. This paper derives and presents generalized expressions, based on TL theory, for the current or voltage distribution along a line (whose TL parameters are known) between the transmitting and receiving ends for any line boundary condition and configuration. The expressions presented in this paper could be beneficial for the direct calculation of EM emissions from BPLC systems.
This paper presents the effects of load impedance, line length, and branches on the performance of an indoor broadband power-line communication (BPLC) network. The power-line network topology adopted here is similar to that of the system found in Tanzania. Different investigations with regard to network load impedances, direct line length (from transmitter to receiver), branched line length, and number of branches have been carried out. From the frequency response of the transfer function (ratio of the received and transmitted signals), it is seen that the position of notches and peaks in the magnitude and phase responses is largely affected by the aforementioned network parameters and configuration, mainly in terms of attenuation and dispersion. These effects are also observed in the time-domain responses. The observations presented in the paper could be helpful in the suitable design of BPLC systems for better data transfer and system performance.
This paper presents the influence of line length, number of branches (distributed and concentrated), and terminal impedances on the performance of a low-voltage broadband power-line communication channel. For the analyses, the systems chosen are typical low-voltage power-line networks found in Tanzania. The parameters varied were the network's load impedances, direct line length (from transmitter to receiver), branched line lengths, and number of branches. From the frequency responses of the transfer functions (ratio of the received and transmitted signals), it is seen that the position of notches and peaks in the amplitude responses is affected by the aforementioned network parameters and topology. As a result, the time-domain responses are attenuated and distorted. Time-domain responses of power-line channels under various conditions are also investigated for a given pulse input at the transmitter. The observations presented in this paper could be useful for suitable power-line communication system design.
Information and communications technologies (ICTs) are gaining importance in developing countries, and the power-line network is a potential infrastructure for the provision of ICT services. Power lines form a highly interconnected network with stochastic variation in the number of branches. Under such distributed network conditions, the design of a broadband power-line communication (BPLC) system is a challenge. In this paper, a case study of an actual power-line network, representative of a low-voltage BPLC channel in Dar es Salaam, Tanzania, is considered. We investigate the performance of such a low-voltage channel that uses the orthogonal frequency-division multiplexing (OFDM) technique with a binary phase-shift keying (BPSK) modulation scheme for communication. For the sensitivity analysis, three different transmitter locations were chosen and the receiver points were varied to identify possible degraded-performance scenarios. The analyses show that in frequency bands up to 100 MHz, the channel delay spread for such networks is about 4 μs, giving a maximum of 4096 subchannels with a cyclic prefix of 512. To improve the degraded-performance scenarios, a concatenated Reed-Solomon outer code with a punctured convolutional inner code was applied to the network. It was found that when the branches were terminated in their corresponding characteristic impedances, the performance improved by 1.0-20 dB compared to the corresponding uncoded system. For a coded system in which the branches were terminated in impedances either lower or higher than the branch characteristic impedances, the improvement was 2-15 dB or greater. This study demonstrates that the specifications proposed by the IEEE 802.16 broadband wireless access working groups can be used for performance improvement of distributed low-voltage systems.
Direct current circuit breakers (DCCBs) have become a large research topic and are considered one of the critical components for future DC grids. Proposed DCCB concepts may be grouped into hybrid DCCBs and active resonant DCCBs. In this work, the enhanced active resonant (EAR) DCCB family is introduced. EAR DCCBs combine elements of hybrid and active resonant DCCBs. The EAR DCCB family consists of one unidirectional and six bidirectional concepts. All concepts feature proactive commutation. The main characteristic of the EAR DCCBs is that discharge closing switches are used instead of semiconductors with turn-off capability. Relevant discharge closing switch technology is reviewed, a laboratory prototype is explained, and experimental results are presented to demonstrate the feasibility of the proposed DCCB concepts.
In this paper, a generalized leader inception model is proposed. It is based on an iterative geometrical analysis of the background potential distribution of an earthed structure to simulate the first meters of propagation of an upward connecting leader. By assuming a static field approach, the leader stabilization fields and the striking distances were computed for a lightning rod and for a building. The obtained results were compared with the existing leader inception criteria. Furthermore, in order to validate the model, the leader inception condition was computed for a triggered-lightning experiment. Excellent agreement with the experimental results was obtained. The present model has several advantages in comparison with the existing leader inception criteria. One of them is that the proposed model can be used to analyze the effect of the space charge on upward leader inception.
In this paper, the problem of online estimation of subsynchronous frequency components in the measured grid voltage is treated. Two estimation methods, one based on the use of low-pass filters and one on a recursive least-squares algorithm, are investigated and compared. In particular, due to its higher degrees of freedom in the design, the low-pass-filter-based estimation method is found to be the most appropriate and is accordingly analyzed further. The method is improved to cope with inaccurate knowledge of the subsynchronous frequency. The simulation results prove the effectiveness of the investigated method in both the ideal and the disturbed case.
In this paper, a novel control strategy for subsynchronous resonance (SSR) mitigation using a static synchronous series compensator (SSSC) is presented. The SSSC is constituted by three single-phase voltage-source converters. SSR mitigation is obtained by increasing the network damping only at those frequencies that are critical for the turbine-generator shaft. This is achieved by controlling the subsynchronous component of the grid current to zero. Using the IEEE First Benchmark Model, the effectiveness of the proposed control algorithm in mitigating SSR due to torsional interaction and the torque amplification effect is shown.
Phasor-based wide-area monitoring and control (WAMC) systems are becoming a reality with increased research, development, and deployments. Many potential control applications based on these systems are being proposed and researched. These applications are either local applications using data from one or a few phasor measurement units (PMUs) or centralized applications utilizing data from several PMUs. An aspect of these systems that is less well researched is the WAMC system's dependence on high-performance communication systems. This paper presents the results of research performed to determine the requirements of transmission system operators on the performance of WAMC systems in general, as well as the characteristics of communication delays incurred in centralized systems that utilize multiple PMUs distributed over a large geographic area. The paper presents a summary of requirements from transmission system operators with regard to a specific set of applications, together with simulations of communication networks with a special focus on centralized applications. The results of the simulations indicate that the configuration of central nodes in centralized WAMC systems needs to be optimized based on the intended WAMC application.
Wide-area monitoring and control (WAMC) systems are the next-generation operational-management systems for electric power systems. The main purpose of such systems is to provide high-resolution, real-time situational awareness in order to improve the operation of the power system by detecting and responding to fast-evolving phenomena. From an information and communication technology (ICT) perspective, the nonfunctional qualities of these systems are becoming increasingly important, and there is a need to evaluate and analyze the factors that impact these nonfunctional qualities. Enterprise architecture methods, which capture properties of ICT systems in architecture models and use these models as a basis for analysis and decision making, are a promising approach to meet these challenges. This paper presents a quantitative architecture analysis method for the study of WAMC ICT architectures, focusing primarily on the interoperability and cybersecurity aspects.
The self-consistent leader inception and propagation model is used to analyze the influence of the phase voltage on the attachment of lightning to ultra-high-voltage power transmission lines (UHV-TLs). A UHV ac line with shielding failures reported in the literature is used as a case study. It is shown that the lengths of upward leaders initiated from conductors and their striking distances are longer under positive voltages than when energized with the opposite polarity. Therefore, the fraction of shielding failures of each conductor changes significantly with the phase angle in ac lines. However, it is found that the overall effect of voltage on lightning attachment can also be limited by the electrostatic screening produced by shield wires and their leaders. This proximity effect mainly reduces the velocity of upward leaders launched from energized conductors. Therefore, the effect of voltage on the lightning attachment process cannot be generalized, since it is strongly coupled to the proximity of shield wires and their associated leaders. Thus, assessments of lightning shielding performance should consider case-to-case variations in the upward leader velocity in different UHV-TL designs, given not only by the line voltage but also coupled to the proximity of other wires and their launched leaders.
This paper proposes a method to investigate the socioeconomic aspects of transformer overloading during a cold load pickup (CLPU) in residential areas. The method uses customer damage functions to estimate the cost of power interruptions to customers and a deterioration model to estimate the cost of transformer wear due to the CLPU. A thermodynamic model is implemented to estimate the peak and the duration of the cold residential load. A stochastic differential equation is used to capture the volatility of the load and to estimate the probability of transformer overloading. In a numerical example, an optimal cold load pickup for a two-area system is demonstrated in which transformer overloading is allowed. In this example, an ambient temperature threshold is identified at which transformer overloading becomes socioeconomically beneficial.
Thermostatically controlled devices, such as air conditioners, heaters, and heat pumps, may cause cold load pickup (CLPU) problems after a prolonged blackout. This causes an increased load on the power components in the electrical grid, resulting in unpredictable aging and an increased risk of failure. Quantifying this risk is crucial for efficient asset management of cost-intensive components such as the transformer. This paper presents a new approach to modeling the loading profile of a CLPU using stochastic differential equations. The realization of the loading profile is used to determine the aging of a transformer. Two models for the deterioration of the transformer's solid insulation represent the loss of life due to the CLPU, and the two are compared in a case study. Due to the stochastic behavior of the load, there is a probability of loading the transformer above the recommended ratings, and this probability is estimated with Monte Carlo simulations.
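The loss-of-life calculation behind such deterioration models can be sketched with the IEEE C57.91 aging-acceleration factor for thermally upgraded paper, which equals 1 at the 110 °C reference hot-spot temperature; the hot-spot profile below is a made-up CLPU episode, not data from the paper.

```python
import math

def aging_acceleration(hot_spot_c):
    """IEEE C57.91 aging-acceleration factor for thermally upgraded paper.

    Equals 1 at the 110 degC reference hot-spot temperature; doubles
    roughly every 6-7 degC above it.
    """
    return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

def loss_of_life_hours(hot_spot_profile_c, step_h=1.0):
    """Equivalent insulation aging (hours) accumulated over a temperature profile."""
    return sum(aging_acceleration(t) * step_h for t in hot_spot_profile_c)

# Hypothetical six-hour CLPU episode (hourly hot-spot temperatures in degC).
profile = [98.0, 105.0, 120.0, 130.0, 125.0, 110.0]
equivalent_aging = loss_of_life_hours(profile)
```

In a Monte Carlo setting, each realized loading profile is mapped to a hot-spot trajectory and then through this factor, yielding the distribution of transformer loss of life.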
The stability of an interconnected ac/dc system is affected by disturbances occurring in the system. Disturbances such as three-phase faults may jeopardize rotor-angle stability and thus cause generators to fall out of synchronism. The ability of a multiterminal dc grid to change its injected powers rapidly can, through proper control action, enhance this stability. This paper proposes a new time-optimal control strategy for the injected power of multiterminal dc grids to enhance rotor-angle stability. The controller is time optimal, since it reduces the impact of a disturbance as fast as possible, and is based on Lyapunov theory, taking the nonlinear behavior into account. The time-optimal controller is of bang-bang type and uses wide-area measurements as feedback signals. Nonlinear simulations are run in the Nordic32 test system implemented in PowerFactory/DIgSILENT with an interface to Matlab, where the controller is implemented.
The security region of a power system is an important and timely issue, and different stability criteria may be limiting. Rotor-angle stability can be improved by modulating the active power of installed high-voltage direct current (HVDC) links. This paper proposes a new centralized nonlinear control strategy, based on Lyapunov theory, for coordinating several point-to-point and multiterminal HVDC systems. The proposed control Lyapunov function is negative semidefinite along the trajectories and uses the internal-node representation of the system. It increases the domain of attraction and thus improves rotor-angle stability. Nonlinear simulations performed on the IEEE 10-machine 39-bus system show the effectiveness of the controller. For comparison, simulations using a conventional lead-lag controller are also run.
In this paper, a methodology is presented to optimize the dc voltage droop settings in a multiterminal voltage-source converter high-voltage direct-current system with respect to ac system stability. Implementing dc voltage droop control enables multiple converters to assist the system in case of a converter outage. However, the abrupt power setpoint changes create additional stress in the ac system, especially when multiple converters are connected to the same interconnected ac system. This paper presents a methodology to determine optimized converter droop settings that do not compromise ac system stability, thereby taking into account the adverse effect the droop control actions have on the interconnected ac system. A disturbance model of the interconnected ac/dc system is developed, whose principal directions indicate the gain and directionality of the disturbances; from this, optimal droop settings are derived that minimize the disturbance gain.
The contribution of this paper is the application of subspace system identification techniques to derive a low-order black-box state-space model of a power system with many controllable devices using global signals. This is a multi-input, multi-output open-system model describing the power-oscillatory behavior of the power system. The input signals are the controllable setpoints of the controllable devices; the output signals are the speeds of selected generators measured by a wide-area measurement system. This paper describes how to acquire and preprocess the data and how to use subspace techniques for estimation and validation in order to finally assign an accurate model. This new approach can be used directly to design a central coordinating controller for all of the relevant controllable devices, with the aim of increasing the damping of the modes in the system. Previously presented methods use local measurements or output signals that depend on the actual operating point. The benefit of the presented method is that the output signals used are independent of the system state, which makes it possible to use state-feedback control to combine the controllable devices so that they coordinately damp the modes. The presented method is applied to the CIGRÉ Nordic 32-bus system including two HVDC links. The case study demonstrates that accurate low-order state-space models can be estimated and validated using the described method to accurately model the system's power-oscillatory behavior.
Electromagnetic interference, man-made noise, and multipath effects are the main causes of bit errors in power-line communication. In this paper, it is experimentally demonstrated that the power-line noise distribution is non-Gaussian below 12 MHz, and close to Gaussian in the 12-50 MHz range. The noise-amplitude distribution of each individual frequency in the spectrum is analyzed and the generalized Gaussian distribution (GGD) is introduced as a suitable noise model across the spectrum. Orthogonal frequency-division multiplexing (OFDM) with convolutional coding and soft-Viterbi decoding is adopted to design a GGD-optimal communication system. Simulations demonstrate the performance improvement offered over the Gaussian-optimal receiver. The channel simulations are verified through measurements.
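The generalized Gaussian density used as the noise model reduces to the Laplacian for shape parameter β = 1 and to the Gaussian for β = 2; a minimal sketch of the density (parameterization and values are standard, not taken from the paper's measurements):

```python
import math

def ggd_pdf(x, beta, alpha=1.0, mu=0.0):
    """Generalized Gaussian density.

    beta = 2 recovers the Gaussian (with alpha = sqrt(2)*sigma);
    beta = 1 recovers the Laplacian; beta < 2 gives heavier tails.
    """
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x - mu) / alpha) ** beta))

# Sanity check against the standard normal: beta = 2, alpha = sqrt(2).
gauss = ggd_pdf(0.0, beta=2.0, alpha=math.sqrt(2.0))
```

A soft-Viterbi metric matched to this density replaces the squared-error branch metric with |x − μ|^β, which is where the GGD-optimal receiver differs from the Gaussian-optimal one.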
The multipath phenomenon lies at the heart of power-line communication and leads to the reception of multiple replicas of the transmitted signal at the receiver through various paths. Statistical knowledge of the arriving paths is essential in order to evaluate the performance of communication systems. The first arriving path is distinguishable from the other paths in the sense that it experiences less reflection and less attenuation along its propagation path, giving it a favorable position regarding detectability. In this study, the statistics of the first arriving path are investigated first. It is shown that the first arriving path can be described by a log-normal probability density function. It is seen that the mean of the approximating log-normal variable decreases with an increasing number of branches between transmitter and receiver, while its variance increases. The same trend is observed when the maximum number of branches that extend out of a branching node is increased. Although the statistics of the first arriving path are emphasized, the statistical characterization of the other paths is discussed as well. The analysis assumes infinite bandwidth, in which all paths arriving at the receiver can be resolved; however, a brief discussion of the impact of finite bandwidth is given.
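A toy illustration of why the first arriving path tends toward a log-normal law: each branching node multiplies the path amplitude by a random transmission coefficient, so the log-amplitude is a sum of i.i.d. terms and is therefore approximately normal. The coefficient range is an illustrative assumption, not a fitted channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_path_amplitudes(n_branches, n_samples=20_000):
    # One random transmission coefficient per branching node, in (0.4, 0.9)
    taus = rng.uniform(0.4, 0.9, size=(n_samples, n_branches))
    return taus.prod(axis=1)

stats = {}
for n in (2, 5, 10):
    log_amp = np.log(first_path_amplitudes(n))
    # Sample mean/std of the log-amplitude = MLE of the log-normal mu, sigma
    stats[n] = (log_amp.mean(), log_amp.std())
    print(n, stats[n])
```

The printed values reproduce the trend stated in the abstract: the mean of the log-amplitude decreases with the number of branches while its spread grows.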
Travelling-wave-based protection functions require significantly higher sampling rates than protection functions based on time-domain superimposed quantities or fundamental-frequency phasors. Integrating these high sampling rates in digital substations significantly increases the communication load of process-level networks and causes high computational cost for centralized protection systems. This paper builds on a distributed signal-processing approach, which allocates the filtering operations among standalone merging units (SAMUs). In particular, the paper presents a decimation filter design to integrate signals with high sampling rates at the process level. In this way, signal sequences with high, medium, and low sampling rates are provided according to the requirements of the respective protection function. The decimation filter structure is optimized with respect to time delay by the Remez exchange algorithm, and with respect to computational cost by a multistage approach and a sparse filter design algorithm. In addition, the filter response is verified against the accuracy constraints defined in IEC 61869-6 and IEC 61869-13. The paper shows that it is feasible to distribute the filtering operations among SAMUs while keeping the time delay below the maximum allowable processing delay of 2 ms.
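A sketch of a single decimation stage of the kind described above: an equiripple low-pass FIR designed with the Remez exchange algorithm, followed by downsampling by M. The sampling rates, band edges, and tap count below are illustrative assumptions, not the IEC 61869 requirements or the paper's optimized multistage design.

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 96_000.0              # input sampling rate (Hz), assumed
M = 4                      # decimation factor -> 24 kHz output
passband = 9_600.0         # band to preserve, assumed
stopband = fs / (2 * M)    # alias-free edge after decimation (12 kHz)

# Equiripple anti-aliasing FIR via the Remez exchange algorithm
taps = remez(numtaps=101, bands=[0, passband, stopband, fs / 2],
             desired=[1, 0], fs=fs)

def decimate(x, h, m):
    # Filter, then keep every m-th sample
    return np.convolve(x, h, mode="same")[::m]

x = np.sin(2 * np.pi * 1000 * np.arange(960) / fs)   # in-band test tone
y = decimate(x, taps, M)

w, H = freqz(taps, worN=4096, fs=fs)
stop_db = 20 * np.log10(np.abs(H[w >= stopband]).max())
print(len(y), round(stop_db, 1))
```

Cascading two or three such stages with smaller factors is what the multistage approach exploits to cut the total number of multiplications.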
Multi-terminal HVDC systems equipped with multiple DC breakers require a protection system that selectively detects faults within a few milliseconds. In the development and application of such protection algorithms, time-domain simulation studies of various faults are essential for determining the settings and verifying the performance. However, when such protections are applied in practice, it should be expected that the models will not perfectly represent the behavior of the real system or transmission line, possibly causing protections to operate unreliably. This paper presents a simulation process that evaluates the consequences for a protection algorithm when the transmission-line parameters are varied. More specifically, the false differential current due to cable-model parameter errors is evaluated in a traveling-wave differential protection applied in a multi-terminal HVDC system during an external fault. The purpose is to identify the most critical parameters and evaluate how inaccuracies influence the protection performance in practice. It is shown that the parameters which significantly influence the propagation delay are critical for achieving the ideal performance - a strictly zero differential current during external faults.
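A minimal sketch of why propagation-delay errors matter: in a traveling-wave differential scheme, the wave entering one end during an external fault leaves the other end one propagation delay later, and the delay used to align the two currents comes from the cable model. The waveform shape, sampling rate, and delays below are illustrative assumptions.

```python
import numpy as np

fs = 1_000_000.0
t = np.arange(0, 2e-3, 1 / fs)
surge = np.exp(-t / 2e-4) - np.exp(-t / 2e-5)   # external-fault traveling wave

true_delay = 120                                 # actual delay, in samples
i_local = surge                                  # current entering the zone
i_remote = -np.roll(surge, true_delay)           # same wave leaving the zone

def false_diff(assumed_delay):
    # Align the local current by the modeled delay, then sum the two ends
    d = np.roll(i_local, assumed_delay) + i_remote
    return np.abs(d).max()

print(false_diff(true_delay))       # exact cable model: zero differential
print(false_diff(true_delay + 5))   # delay error: nonzero false differential
```

The residual grows with the delay error and with the steepness of the wavefront, which is why the parameters governing propagation delay are the critical ones.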
This paper introduces a new method for the preliminary design of power controllers (PCs) in the electric power grid. The method, which is denoted the ideal phase-shifter (IPS) method, utilizes the concept of the power controller plane where the active power of the PC line is plotted versus the difference in voltage angle between the PC terminals. The power controller plane makes it possible to graphically visualize the working area of a PC in a power grid and thus determine the grid situations which are dimensioning for the PC. The IPS method offers the possibility of plotting the grid characteristics in the power controller plane which are unbiased with respect to the reactive properties of the PC. This makes the method suitable for comparison and preliminary design of PCs of different types and with different characteristics by simple geometrical considerations. In this process, the IPS method uses the power-flow control method for deriving the PC characteristics. This paper includes an application example of the method where it is used for dimensioning of two different PCs in a 26-bus test system.
This paper presents the impact of different explanatory variables, such as remote-control availability and conducted preventive maintenance, among others, on the failure statistics of a disconnector population in Sweden using the proportional hazard model. To do so, 2191 work orders were analysed, covering 1626 disconnectors and 278 major failures. The results show that remote-control availability for disconnectors - an example of Smart Grid technology - has a negative effect on the failure rate, whereas preventive maintenance has a positive impact. It is also shown that disconnector age is not significant and that certain disconnector types have a significant positive correlation with failures when compared to other disconnector types. The results increase the understanding of disconnector failures, which can be used to improve asset management.
The failure rate is essential in power system reliability assessment and, thus far, it has commonly been assumed to be constant. This basic approach delivers reasonable results; however, it neglects the heterogeneity in component populations, which reduces the accuracy of the failure rate. This paper proposes a method based on risk functions, which describe the risk behavior of condition measurements over time, to compute individual failure rates within populations. The method is applied to a population of 12 power transformers at the transmission level. The computed individual failure rates reflect the impact of maintenance and show that power transformers with long operation times have a higher failure rate. Moreover, this paper presents a procedure based on the proposed approach to forecast failure rates. Finally, the individual failure rates are calculated over a specified prediction horizon and presented with a 95% confidence interval.
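As a minimal contrast to the constant-failure-rate assumption, the sketch below evaluates a Weibull hazard, a standard non-constant alternative in which the instantaneous failure rate rises with operating time when the shape parameter exceeds one. The shape and scale values are illustrative, not fitted transformer data, and this is not the paper's risk-function method itself.

```python
import numpy as np

def weibull_hazard(t, shape=2.5, scale=60.0):
    # h(t) = (k/lam) * (t/lam)**(k-1); shape > 1 models wear-out behaviour
    return (shape / scale) * (t / scale) ** (shape - 1)

ages = np.array([10.0, 30.0, 50.0])   # operating times in years, assumed
rates = weibull_hazard(ages)
print(rates)   # older units have a higher instantaneous failure rate
```

A constant rate would return the same value for all three ages; the increasing values illustrate the age dependence the individual failure rates capture.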
A nonuniform transmission-line approach is adopted in this paper for modeling the transient behavior of different types of grounding systems under lightning strikes in the time domain, by solving the Telegrapher's equations with the finite-difference time-domain (FDTD) technique. Electromagnetic couplings between different parts of the grounding wires are included using effective per-unit-length parameters (l, c, and g), which are space and time dependent. The present model can accurately predict both the effective length and the transient voltage of grounding electrodes, whereas a uniform transmission-line approach with electrode-length-dependent per-unit-length parameters [19]-[22] fails to do so. Unlike the circuit-theory approach [1]-[4], the present model is capable of accurately predicting the surge propagation delay in large grounding systems. The simulation results for buried horizontal wires and grounding grids based on the present model are in good agreement with those of the circuit and electromagnetic-field approaches [3], [9]. From an engineering point of view, the model presented in this paper is sufficiently accurate, time efficient, and easy to apply.
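A minimal 1-D FDTD sketch of the Telegrapher's equations for a horizontal grounding electrode: staggered voltage/current grids with per-unit-length l and c and shunt conductance g. Here l, c, and g are constants and the injected surge shape is an illustrative assumption; the paper's model uses space- and time-dependent effective parameters.

```python
import numpy as np

nz, nt = 200, 2000
dz = 0.1                               # spatial step (m), assumed
l, c, g = 2.0e-6, 5.0e-12, 1.0e-4      # H/m, F/m, S/m, assumed constants
dt = 0.9 * dz * np.sqrt(l * c)         # CFL-stable time step

v = np.zeros(nz)                       # node voltages
i = np.zeros(nz - 1)                   # branch currents (staggered)
v_feed = []

for n in range(nt):
    t = n * dt
    # Injected double-exponential surge current at the feed node (assumed shape)
    i_src = np.exp(-t / 1e-6) - np.exp(-t / 1e-7)
    # dv/dt = -(1/c) * (di/dz + g*v)
    v[1:-1] -= (dt / (c * dz)) * (i[1:] - i[:-1]) + (dt * g / c) * v[1:-1]
    v[0] -= (dt / (c * dz)) * (i[0] - i_src) + (dt * g / c) * v[0]
    v[-1] -= (dt / (c * dz)) * (0.0 - i[-1]) + (dt * g / c) * v[-1]
    # di/dt = -(1/l) * dv/dz
    i -= (dt / (l * dz)) * (v[1:] - v[:-1])
    v_feed.append(v[0])

print(max(v_feed))   # peak transient ground potential rise at the feed point
```

The shunt conductance g models leakage into the soil; increasing it lowers the feed-point voltage, which is the mechanism behind the effective length of a grounding electrode.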
Due to the increasing number of generation sources in distribution networks, it is becoming more and more complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid-management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks from dynamic measurements (time series) provided by PMUs. As PMU measurements may contain errors and bad data, the paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system’s response at different time scales, excited by different types of power system events; the presented Kalman filter has therefore been extended to extract the steady-state component of the PMU measurements, which is fed to the steady-state model-synthesis application. The performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.
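A scalar sketch of the filtering idea: a Kalman filter with a random-walk state model tracks the slowly varying (steady-state) component of a noisy PMU measurement stream while rejecting fast fluctuations. The process and measurement variances are illustrative tuning values, not the paper's design.

```python
import numpy as np

def kalman_steady_state(z, q=1e-5, r=1e-2):
    # q: process variance (random-walk drift), r: measurement variance
    x, p = z[0], 1.0            # state estimate and its variance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p += q                  # predict: steady state drifts as a random walk
        kgain = p / (p + r)     # update with measurement zk
        x += kgain * (zk - x)
        p *= (1.0 - kgain)
        out[k] = x
    return out

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 1000)
truth = 1.0 + 0.02 * (t > 0.5)                   # small operating-point step
z = truth + 0.01 * rng.standard_normal(t.size)   # PMU noise, assumed level
est = kalman_steady_state(z)
print(abs(est[-1] - truth[-1]))                  # residual after convergence
```

The q/r ratio sets the bandwidth: a small q makes the filter follow only slow operating-point changes, which is exactly the component wanted for steady-state model synthesis.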
Pantograph arcing is a common phenomenon in electrified railway systems. It is also a source of broadband conducted and radiated electromagnetic interference (EMI) for the vehicle as well as for traction power and signaling systems. In this paper and its companion paper, experimental analyses of pantograph arcing phenomena are presented for dc and ac feeding systems, respectively. The influences of different parameters on the dc traction system, such as supply-voltage polarity and the relative motion between the pantograph and the overhead contact wire, namely, forward motion along the track (longitudinal) and lateral sliding motion of the pantograph (zigzag), are presented here. From the voltage and current waveforms of the test runs, it is shown that pantograph arcing is a polarity-dependent phenomenon. For the positively fed overhead traction system, where the pantograph is the cathode, the supply interruptions due to zigzag motion are less frequent than in negatively fed traction systems. As a result, the transients due to pantograph arcing are more frequent in negatively fed traction systems. It is found that the arc-root movement along the electrode surfaces (pantograph and contact wire) is governed by the relative motion and polarity of the electrodes. The analyses presented in this paper also form a foundation for understanding the pantograph arcing process and the corresponding influential parameters with the ac supply presented in the companion paper. The findings presented in this paper could be beneficial for devising appropriate mitigation techniques for the EMI due to pantograph arcing in dc-fed traction systems.
Pantograph arcing with an ac supply generates transients, causes asymmetries and distortion in the supply voltage and current waveforms, and can damage the pantograph and the overhead contact line. The asymmetry generates a net dc component and harmonics, which propagate within the traction power and signalling system and cause electromagnetic interference. Unlike in dc-fed systems (Part I), the arcing with an ac supply is complex because of the zero crossings of currents and voltages. In this paper, we discuss the mechanisms of sliding contact and arcing between the pantograph and contact wire using the experimental setup described in Part I. The influences of various parameters and test conditions on the arcing phenomenon and their signature patterns on the supply voltage and current waveforms are presented. It is shown how the arcing mechanism and the corresponding asymmetry in the voltage and current waveforms are governed by line speed, current, supply voltage, inductive load, and pantograph material. The asymmetry in the current waveform is mainly due to the difference in the duration of successive zero-current regions and the uneven distortion of the waveshapes. This, in turn, creates the asymmetry in the voltage waveform. The findings presented in this paper could be beneficial for devising appropriate mitigation techniques for the electromagnetic interference due to pantograph arcing in ac traction systems.
Dielectric spectroscopy (dielectric response measurements) has been applied for nondestructive estimation of humidity in oil-paper cable insulation. The experiments have been based upon two field-aged cables, 20 and 50 years old. Paper samples from these cables have been characterized and subjected to environments with different relative humidity. Dielectric loss and capacitance have been measured in the frequency range 1 mHz to 1 kHz and related to the moisture content determined by Karl Fischer titration. A method has been verified where the moisture content is correlated to the minimum value of the loss tangent (tan δ). A number of field measurements have been performed in which the method has been applied to estimate the moisture content in distribution cables.
The dynamics of water penetration in mass-impregnated cable insulation have been studied. For experimental purposes, artificial damage has been inflicted on a 40-cm-long cable sample and water ingress has been continuously monitored by frequency-response measurements. A similar experiment has been conducted on a 2.8-m-long cable sample, where both frequency-response and time-domain reflectometry (TDR) measurements have been performed. After termination of both experiments, the actual moisture content has been measured radially and axially. Based on the dielectric measurements, a model of water ingress has been developed and diffusion coefficients have been estimated for mass-impregnated cable paper.
Wide-area networks (WANs) are being deployed worldwide at power utilities. By replacing vintage narrowband solutions, broadband communications can now be used across an entire utility enterprise. One new possibility that has opened up is to improve power system maintenance. During the last ten years, the benefits of adequate asset management have become increasingly clear in the power industry due to economic pressures and aging infrastructure. An obvious tool for improving asset-management strategies is the implementation of information-technology systems that support the operational processes. The impact of these systems can be further enhanced by efficient communications, allowing the proliferation of functionality and data access. This paper describes the initial stages in analyzing the combined effects of enhanced communication and maintenance requirements. It provides a useful maintenance categorization, based on empirical data. Also, areas of improvement for power system maintenance are elucidated together with the benefit of enhanced communications.
The purpose of this paper is to present a framework for assessing information security in power communication systems. The framework consists of dividing the communication system to be analyzed into its subcomponents and linking these to relevant evaluation criteria. In this study, the information security standard ISO 17799 has been used as a point of reference to define such evaluation criteria. The framework involves collecting data to evaluate each individual criterion and aggregating these evaluations using a robust algorithm. To cater for the many uncertainties in evaluating information security, the evaluations of the individual subcomponents are aggregated using a Dempster-Shafer-based algorithm for evidential reasoning. This algorithm accommodates the uncertain and incomplete data that are inherent in large-scale systems. The overall result is a set of indicators which highlight the level of information security within a studied communication system. The paper concludes with a description of a case study in which the framework was applied to a communication system used for automatic meter reading (AMR). Experiences from this application are also described.
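A minimal sketch of Dempster's rule of combination, the core of Dempster-Shafer evidential reasoning, over a two-element frame {secure, insecure} with the whole frame representing ignorance. The mass values are illustrative evaluation results, not outputs of the paper's framework.

```python
def combine(m1, m2):
    # Dempster's rule: masses are dicts over frozensets (focal elements);
    # conflicting mass (empty intersection) is discarded and renormalized.
    out, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in out.items()}

S, I = frozenset({"secure"}), frozenset({"insecure"})
theta = S | I                         # the whole frame: "don't know"
m1 = {S: 0.6, I: 0.1, theta: 0.3}     # evidence from one criterion (assumed)
m2 = {S: 0.5, I: 0.2, theta: 0.3}     # evidence from another (assumed)
m = combine(m1, m2)
print(m)
```

Mass assigned to the whole frame is what lets the rule represent incomplete data: two partially ignorant sources combine into a sharper, still normalized, belief assignment.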
This paper examines the enhancement of power system stability properties by the use of thyristor-controlled series capacitors (TCSCs) and static var systems (SVCs). Models suitable for incorporation in dynamic simulation programs used to study angle stability are analyzed. A control strategy for damping of electromechanical power oscillations using an energy function method is derived. Using this control strategy, each device (TCSC and SVC) contributes to the damping of power swings without deteriorating the effect of the other power-oscillation-damping (POD) devices. The damping effect is robust with respect to loading condition, fault location, and network structure. Furthermore, the control inputs are based on local signals. The effectiveness of the controls is demonstrated for model power systems.
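A sketch of the energy-function idea on a single-machine swing equation: a series device modulates the transferred electrical power in proportion to the local frequency deviation, which drains the transient (kinetic) energy of the swing. All per-unit parameters and the gain are illustrative assumptions, not the paper's derived control law.

```python
import numpy as np

M, D, Pm, Pmax = 10.0, 0.5, 0.8, 2.0   # inertia, damping, mech./max power (p.u., assumed)
dt, nt = 0.005, 4000                    # 20 s simulation

def simulate(gain):
    delta = np.arcsin(Pm / Pmax) + 0.5  # angle displaced from equilibrium (post-fault)
    omega = 0.0
    kinetic = np.empty(nt)
    for k in range(nt):
        # Series-device modulation scales transferred power with omega
        pe = Pmax * np.sin(delta) * (1.0 + gain * omega)
        domega = (Pm - pe - D * omega) / M
        delta += dt * omega
        omega += dt * domega
        kinetic[k] = 0.5 * M * omega ** 2
    return kinetic

undamped = simulate(0.0)
damped = simulate(2.0)                  # supplementary damping gain (assumed)
print(undamped[2000:].max(), damped[2000:].max())
```

Because the modulation term always opposes the sign of the frequency deviation, the time derivative of the system energy acquires an extra negative term, which is the essence of the energy-function argument.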
This paper examines the improvement of power system dynamics by the use of the unified power flow controller (UPFC), the thyristor-controlled phase-shifting transformer (TCPST), and the thyristor-controlled series capacitor (TCSC). Models suitable for incorporation in dynamic simulation programs for studying angle stability are analysed. A control strategy for damping of electromechanical power oscillations using an energy function method is derived. The derived control laws are shown to be effective for damping both large-signal and small-signal disturbances and are robust with respect to loading condition, fault location, and network structure. Furthermore, the control inputs are easily obtained from locally measurable variables. The effectiveness of the controls is demonstrated for model power systems.