This issue of the Journal of Telecommunications and Information Technology is dominated by a set of papers presenting theoretical studies and computer simulations devoted to optimization of several types of multiuser wireless systems.
This part begins with two papers written by M. Addad and A. Djebbari. The first one, titled A New Code Family for QS-CDMA Visible Light Communication Systems, presents new codes for broadband Visible Light Communications (VLC) – a method relying on modulated light emitted by light emitting diodes (LEDs) for short-range data transmission in various environments. Optical Code Division Multiple Access (OCDMA) is a strong candidate for VLC applications, but multiple access interference is the predominant source of bit errors when system users transmit asynchronously. In this case, the so-called Zero Cross-Correlation (ZCC) codes (orthogonal sets of sequences in which the ones never overlap and a zero zone exists) ensure the best system performance, even in the presence of synchronization problems and multipath propagation.
The same authors look, in the second paper titled Suitable Spreading Sequences for Asynchronous MC-CDMA Systems, at ways to improve the performance of the Multi-Carrier Code Division Multiple Access (MC-CDMA) technology, which combines Orthogonal Frequency Division Multiplexing (OFDM) with Code Division Multiple Access (CDMA) and is frequently used in radio communication systems. However, an MC-CDMA network serving a large number of users suffers from a high Peak-to-Average Power Ratio (PAPR), causing poor utilization of transmitter power. In addition, asynchronous MC-CDMA suffers from multiple access interference. Both harmful effects must be reduced to maintain a low error rate. Computer simulations led to the conclusion that Zero Correlation Zone (ZCZ) sequences are the most suitable spreading sequences.
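The zero-correlation-zone property that makes such sequences attractive can be illustrated with a short sketch. The toy sequences below are cyclic shifts of the length-4 perfect binary sequence, not the families proposed in the paper; they merely show a shift range over which cross-correlation vanishes:

```python
# Minimal illustration of the zero correlation zone (ZCZ) property.
# Toy example only: real ZCZ families for asynchronous MC-CDMA come from
# dedicated constructions such as those discussed in the paper.

def periodic_corr(x, y, shift):
    """Periodic correlation of x and y at a given cyclic shift."""
    n = len(x)
    return sum(x[i] * y[(i + shift) % n] for i in range(n))

a = [1, 1, 1, -1]                        # perfect sequence: autocorrelation 4, 0, 0, 0
b = [a[(i - 3) % 4] for i in range(4)]   # cyclic shift of a by 3 positions

auto_a = [periodic_corr(a, a, s) for s in range(4)]
cross  = [periodic_corr(a, b, s) for s in range(4)]

print("autocorrelation of a:", auto_a)   # [4, 0, 0, 0]
print("cross-correlation a,b:", cross)   # [0, 0, 0, 4] -> zero zone at shifts 0..2
```

Within the zero zone (shifts 0 to 2 here), asynchronous users remain mutually orthogonal, which is why multiple access interference is suppressed even without strict synchronization.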
The issue of reducing PAPR and efficiently utilizing transmitter power in OFDM systems is also analyzed by Y. Aimer, B. Seddik Bouazza, S. Bachir, and C. Duvanaud in their paper titled Interleaving Technique Implementation to Reduce PAPR of OFDM Signal in Presence of Nonlinear Amplification with Memory Effects. The authors show that multicarrier signals can be interleaved to prevent the grouping of errors (which cannot be removed by means of forward error correction), and that null subcarriers can be used to transmit side information to the receiver, with that transmission augmented by a new method for coding interleaver keys at the transmitter and a robust decoding procedure at the receiver. Simulations of such a WLAN 802.11a system, including a nonlinear power amplifier with memory, indicate a PAPR reduction of 5.2 dB.
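The underlying idea – measure the PAPR of candidate interleaved versions of the same OFDM symbol and transmit the best one, signalling the chosen interleaver as side information – can be sketched as follows. The block size, number of candidate interleavers, and QPSK mapping are illustrative assumptions, not the authors' parameters:

```python
import cmath
import math
import random

def idft(X):
    """Inverse DFT (direct O(N^2) form) producing the time-domain OFDM signal."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a complex signal, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(1)
N = 16
symbols = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(N)]  # QPSK

baseline = papr_db(idft(symbols))

# Try a few random interleavers; the index of the best one would be sent to
# the receiver as side information (on null subcarriers, per the paper).
best = baseline
for _ in range(8):
    perm = list(range(N))
    random.shuffle(perm)
    best = min(best, papr_db(idft([symbols[p] for p in perm])))

print(f"PAPR: {baseline:.2f} dB -> {best:.2f} dB")
```

Selecting the minimum over candidates can never increase PAPR, which is what makes the technique attractive when the power amplifier is driven close to saturation.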
The next paper, titled Using Least Mean p-Power Algorithm to Correct Channel Distortion in MC-CDMA Systems, by M. Zidane, S. Safi, M. Sabri, and M. Frikel, also concerns MC-CDMA 4G mobile radio systems, but focuses on adaptive downlink equalization to compensate for channel distortion, investigated analytically in terms of the bit error rate using the Least Mean p-Power (LMP) algorithm. The results of numerical simulations, performed for various values of the signal-to-noise ratio and the p parameter, show that the presented algorithm is able to estimate the measured standard BRAN C channel with varying levels of accuracy.
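The LMP update generalizes the familiar LMS rule by raising the error to the power p − 1: w ← w + μ|e|^(p−1) sign(e) x. A minimal sketch, assuming a two-tap toy channel and noiseless data (the BRAN C channel model and the paper's settings are not reproduced here):

```python
import random

random.seed(0)
h = [0.8, 0.3]          # illustrative "unknown" channel, not the BRAN C model
w = [0.0, 0.0]          # adaptive filter weights
mu, p = 0.05, 1.5       # step size and power parameter of the LMP rule

x_hist = [0.0, 0.0]
for _ in range(5000):
    x = random.uniform(-1, 1)
    x_hist = [x, x_hist[0]]                       # two most recent inputs
    d = h[0] * x_hist[0] + h[1] * x_hist[1]       # channel output (noiseless)
    y = w[0] * x_hist[0] + w[1] * x_hist[1]       # adaptive filter output
    e = d - y
    # LMP step: gradient of |e|^p gives |e|^(p-1) * sign(e)
    g = mu * (abs(e) ** (p - 1)) * (1 if e >= 0 else -1)
    w = [w[0] + g * x_hist[0], w[1] + g * x_hist[1]]

print(w)   # converges toward h
```

For p = 2 this reduces to standard LMS; other values of p trade convergence speed against robustness to the noise distribution, which is what the paper's simulations explore.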
Many novel technologies have been proposed for inclusion in 5G wireless networks. One of them is device-to-device (D2D) communication: direct communication between two or more user devices across a short distance, without the participation of the base station. D2D can handle certain types of mobile data traffic more efficiently, but produces more interference, as it uses the same frequency band as the underlying cellular network. In their study Interference Management Using Power Control for Device-to-Device Communication in Future Cellular Networks, T. A. Nugraha, M. P. Pamungkas, and A. N. N. Chamim investigate the use of adaptive power control to mitigate such interference. Simulations show that the signal-to-interference-plus-noise ratio (SINR) can be improved by 0.5–1 dB compared to operation at a fixed power level.
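The closed-loop idea behind such schemes can be sketched in a few lines: raise the D2D transmit power in small steps until a target SINR is met or a power cap is reached. All link gains, powers, and the target below are made-up illustrative numbers, not values from the paper:

```python
import math

def sinr_db(p_tx, gain, interference, noise):
    """SINR of a link with received power p_tx*gain (linear in, dB out)."""
    return 10 * math.log10(p_tx * gain / (interference + noise))

# Illustrative D2D link reusing a cellular band (hypothetical numbers).
gain, interference, noise = 1e-4, 2e-6, 1e-7
target_db, p_max = 12.0, 2.0

p = 0.1                                   # start at the fixed power level
fixed = sinr_db(p, gain, interference, noise)
for _ in range(20):                       # simple closed-loop power control
    if sinr_db(p, gain, interference, noise) < target_db and p < p_max:
        p = min(p * 1.12, p_max)          # step power up until target or cap
adaptive = sinr_db(p, gain, interference, noise)

print(f"fixed: {fixed:.2f} dB, adaptive: {adaptive:.2f} dB")
```

In a real network the cap and step size must also protect the cellular users sharing the band, which is the interference-management aspect the paper studies.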
Errors produced by noise and adverse propagation phenomena in wireless and wired communication systems render the use of error-correcting codes mandatory in many cases. Their implementations, however, tend to be complex. In their paper titled Low Density Parity Check Codes Constructed from Hankel Matrices, M. A. Tehami and A. Djebbari present a new technique for constructing Low Density Parity Check (LDPC) codes based on the Hankel matrix and circulant permutation matrices. The new codes are free of cycles of length four, ensuring low-complexity hardware implementations with a reduced number of logic gates and the use of simple shift registers. Simulations show that the proposed codes perform very well over additive white Gaussian noise channels.
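The absence of length-4 cycles is easy to check: a parity-check matrix H contains such a cycle exactly when two of its rows share ones in two or more columns. A small sketch with hypothetical matrices (not the Hankel-based codes from the paper):

```python
def has_four_cycle(H):
    """True if the Tanner graph of parity-check matrix H contains a length-4
    cycle, i.e. two rows of H share ones in at least two columns."""
    rows = [set(j for j, v in enumerate(r) if v) for r in H]
    return any(len(rows[i] & rows[k]) >= 2
               for i in range(len(rows)) for k in range(i + 1, len(rows)))

# Two rows overlapping in columns 0 and 2 -> a 4-cycle:
H_bad = [[1, 0, 1, 0],
         [1, 0, 1, 1]]

# Rows built from shifted-identity (circulant permutation) blocks overlap in
# at most one column, so no 4-cycle:
H_good = [[1, 0, 0, 1, 0, 0],
          [0, 1, 0, 0, 1, 0],
          [0, 0, 1, 0, 0, 1]]

print(has_four_cycle(H_bad), has_four_cycle(H_good))   # True False
```

Short cycles degrade iterative decoding, so guaranteeing girth greater than four – as the proposed construction does by design – directly benefits error-rate performance.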
Telephone systems must often work in noisy environments, such as the interior of a car, a train station, or a place where other people are speaking at the same time. Several methods of speech enhancement relying on digital signal processing to remove noise and improve intelligibility have been developed. None of them, however, is equally effective in different conditions, especially when the interfering sound is non-stationary. In the paper titled Incoherent Discriminative Dictionary Learning for Speech Enhancement, D. Shaheen, O. Al Dakkak, and M. Wainakh propose and test a new Incoherent Discriminative Dictionary Learning (IDDL) algorithm that models both speech and noise, with a cost function accounting for both “source confusion” and “source distortion” errors and a regularization term penalizing the coherence between the speech and noise sub-dictionaries. At the enhancement stage, sparse coding over the learned dictionary is used to estimate both the clean speech and the noise spectrum. Finally, a Wiener filter refines the clean speech estimate. Experiments on the Noizeus dataset demonstrate that, in the presence of structured non-stationary noise (but not white noise), the proposed algorithm outperforms other dictionary learning speech enhancement algorithms – K-SVD, GDL and FDDL – in terms of two objective measures: frequency-weighted segmental SNR and Perceptual Evaluation of Speech Quality (PESQ).
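The final Wiener-filtering step can be sketched per frequency bin: given the speech and noise power spectra estimated by the sparse-coding stage, the gain S/(S + N) is applied to the noisy spectrum. The spectra below are toy numbers standing in for those estimates:

```python
def wiener_gain(speech_psd, noise_psd):
    """Per-bin Wiener gain G = S / (S + N) from estimated power spectra."""
    return [s / (s + n) for s, n in zip(speech_psd, noise_psd)]

# Toy per-bin power spectra (assumed); in the paper these come from sparse
# coding over the learned speech and noise sub-dictionaries.
speech = [4.0, 1.0, 0.25]
noise  = [1.0, 1.0, 1.0]
noisy  = [5.0, 2.0, 1.25]          # observed power per bin (speech + noise)

g = wiener_gain(speech, noise)
enhanced = [gi * yi for gi, yi in zip(g, noisy)]
print(g)   # bins dominated by speech are kept, noisy bins are attenuated
```

The quality of the enhancement thus hinges on the accuracy of the spectral estimates, which is exactly what the discriminative, incoherence-penalized dictionaries are designed to improve.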
The next two papers are devoted to cognitive radio, a promising technology enabling adaptive and more efficient use of the currently scarce spectrum by additional or “secondary” users whenever the primary (licensed) users do not transmit. This, however, requires new methods of spectrum allocation and of signal reception in adverse conditions.
In the paper titled Interference Aware Routing Game for Cognitive Radio Ad-hoc Networks by S. Amiri-Doomari, G. Mirjalily, and J. Abouei, an interference-aware routing game is proposed that connects flow initiators to destinations. A network formation game among secondary users is formulated, in which each secondary user aims to maximize its utility while reducing both the aggregate interference imposed on the primary users and the end-to-end delay. To achieve this, the new algorithm selects upstream neighbors from the sender’s point of view, avoids congested network zones, and forms at least one path from the flow initiator to the destination. Interference between secondary users is modeled with a signal-to-interference-plus-noise ratio (SINR) model. Numerical simulations show that the proposed algorithm outperforms Interference Aware Routing (IAR) in cognitive radio mesh networks, requiring fewer hops between the initiator and the destination.
Optimization of receiver designs for cognitive networks is the subject of the paper titled Theoretical Investigation of Different Diversity Combining Techniques in Cognitive Radio by R. Agarwal, N. Srivastava, and H. Katiyar. The authors compare the performance of energy detectors in cognitive radio using different diversity combining techniques. While the Maximal Ratio Combining (MRC) receiver performs best, it is complex and expensive, so less complex combining techniques, such as switched diversity, are preferred. Two such schemes were analyzed: Switch Examine Combining (SEC) and Switch Examine Combining with post-examining selection (SECp). General formulas for the probability of detection using the MRC, SEC, and SECp diversity combining techniques over the Rayleigh fading channel were derived for various numbers of branches, and a trade-off between detection performance and receiver complexity was observed.
In their article titled Swarm Intelligence-based Partitioned Recovery in Wireless Sensor Networks, G. Kumar and V. Ranga look at ways to improve the reliability and resilience of heterogeneous wireless sensor networks, whose battery-powered sensor nodes operate in hostile, noisy environments and frequently fail (usually due to a discharged battery) or lose connections to other nodes. A network partition recovery solution based on the Grey Wolf optimizer algorithm is presented for repairing segmented heterogeneous wireless sensor networks. This solution not only provides strong bi-connectivity in the damaged area, but also distributes the traffic load among the deployed nodes to extend the repaired network’s lifetime. Computer simulations show that the Grey Wolf algorithm offers considerable performance advantages over other approaches.
Concluding this issue of JTIT is the paper titled Non-crossing Rectilinear Shortest Minimum Bend Paths in the Presence of Rectilinear Obstacles by Shylashree Nagaraja, in which a new algorithm for determining the shortest, non-crossing, rectilinear paths in a two-dimensional grid graph is presented. The shortest paths determined do not cross each other and bypass all obstacles, which is useful in the design of integrated circuits and printed circuit boards, in the routing of traffic in wireless sensor networks, etc. When more than one non-crossing path of equal length exists between the source and the destination, the proposed algorithm selects the path with the fewest corners (bends). In this method, the grid points are the vertices of the graph and the lines joining them are its edges; the obstacles are represented by their boundary grid points. Once the graph is ready, an adjacency matrix is generated and the Floyd-Warshall all-pairs shortest path algorithm is applied iteratively to identify the shortest non-crossing paths. To obtain the minimum number of bends in a path, the author modifies the Floyd-Warshall algorithm, which constitutes the novel element of this work.
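The baseline building block – Floyd-Warshall on a grid graph with obstacle vertices removed – can be sketched as follows. The bend-minimizing tie-breaking, which is the paper's contribution, is not reproduced here; the grid size and obstacle placement are illustrative:

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest path distances; edges is a list of (u, v) pairs
    of unit length (grid neighbours)."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# 3x3 grid, vertex (r, c) -> index 3*r + c; the centre vertex 4 acts as an
# obstacle, so all edges touching it are dropped.
obstacle = {4}
edges = []
for r in range(3):
    for c in range(3):
        u = 3 * r + c
        for (r2, c2) in ((r + 1, c), (r, c + 1)):
            if r2 < 3 and c2 < 3:
                v = 3 * r2 + c2
                if u not in obstacle and v not in obstacle:
                    edges.append((u, v))

d = floyd_warshall(9, edges)
print(d[0][8])   # 4: the shortest path detours around the obstacle
```

Among the several length-4 detours that exist here, the paper's modified algorithm would additionally prefer the one with the fewest bends – the tie-breaking step that plain Floyd-Warshall does not perform.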