This issue is not devoted to a single subject; it includes papers belonging to several distinct fields of communications and information technology:

  • radio communications,
  • optical networks,
  • information technology, software and data processing, and
  • postal services.

Papers related to radio technologies begin with An Improved Downlink MC-CDMA System for Efficient Image Transmission by M. S. Bendelhoum, A. Djebbari, I. Boukli-Hacene and A. Taleb-Ahmed. The authors attempt to optimize compression of images transmitted over a multi-carrier radio link with limited bandwidth and varying transmission quality affected by noise. After extensive computer simulations, they have found that image compression using the Discrete Wavelet Transform (DWT) technique, in conjunction with the Set Partitioning in Hierarchical Trees (SPIHT) coder, gives the best results when the compressed image is sent over a Multi-Carrier Code Division Multiple Access (MC-CDMA) wireless network with a limited signal-to-noise ratio, where selective degradation of one of the parallel data streams is frequently experienced. Monochrome photograph compression rates of up to 90% are possible.
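
To illustrate the transform stage, the following minimal Python sketch (assuming the PyWavelets library) applies a 2-D DWT to a monochrome image and discards all but the strongest coefficients; simple thresholding stands in here for the SPIHT coder, and the wavelet, decomposition level and keep ratio are illustrative choices, not values from the paper.

  import numpy as np
  import pywt

  def dwt_compress(image, keep_ratio=0.1, wavelet='bior4.4', level=3):
      # Multi-level 2-D DWT, then zero all but the largest coefficients
      # (SPIHT would encode the significant coefficients progressively instead).
      coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
      arr, slices = pywt.coeffs_to_array(coeffs)
      threshold = np.quantile(np.abs(arr), 1.0 - keep_ratio)
      arr = np.where(np.abs(arr) >= threshold, arr, 0.0)
      coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
      return pywt.waverec2(coeffs, wavelet)

  # Synthetic 256 x 256 monochrome "image"; keep_ratio=0.1 means ~90% of the
  # coefficients are discarded before coding and transmission over the radio link.
  img = np.random.rand(256, 256)
  rec = dwt_compress(img, keep_ratio=0.1)[:256, :256]
  psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))
  print(f"PSNR after compression: {psnr:.1f} dB")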

The next two papers deal with different aspects of the beamforming technology currently being introduced in wireless networks.

The Multi-User Multiple Input Multiple Output (MU-MIMO) technology is very promising for high-capacity wireless networks, including Wireless Local Area Networks (WLANs) defined in the IEEE 802.11ac standard, and for the future 5G networks. However, the full advantage of MU-MIMO can be experienced only with proper user selection and scheduling. User scheduling is done after acquisition of Channel State Information (CSI) from all users, and the number of CSI requests grows with the number of active users, resulting in a rising CSI overhead and in degradation of the overall network throughput. In the paper titled QoS-based Joint User Selection and Scheduling for MU-MIMO WLANs, D. S. Rao and V. B. Hency present the Joint User Selection and Scheduling (JUSS) scheme, comparing its performance to other Medium Access Control (MAC) protocols. It was found that JUSS enhances network throughput and prevents contention during the CSI feedback period, rendering it superior to other protocols.
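
The flavor of CSI-based user selection can be conveyed by a short, hypothetical Python sketch: a generic greedy, semi-orthogonal selection over reported channel vectors. This is not the JUSS scheme itself; the orthogonality threshold of 0.5 and the random channels are purely illustrative.

  import numpy as np

  def greedy_user_selection(H, max_users):
      # H: (n_users, n_antennas) complex channel matrix reported by the stations.
      # Pick strong users whose channel vectors stay nearly orthogonal to those
      # already selected (a generic semi-orthogonal selection, not JUSS itself).
      selected, basis = [], []
      for u in np.argsort(-np.linalg.norm(H, axis=1)):    # strongest users first
          h = H[u].astype(complex)
          for b in basis:                                  # remove already-covered directions
              h = h - (b.conj() @ H[u]) * b
          if np.linalg.norm(h) > 0.5 * np.linalg.norm(H[u]):
              selected.append(int(u))
              basis.append(h / np.linalg.norm(h))
          if len(selected) == max_users:
              break
      return selected

  H = (np.random.randn(20, 4) + 1j * np.random.randn(20, 4)) / np.sqrt(2)
  print(greedy_user_selection(H, max_users=4))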

The next paper, Synthesis and Failure Correction of Flattop and Cosecant Squared Beam Patterns in Linear Antenna Arrays by H. Patidar, G. K. Mahanti, and R. Muralidharan, is devoted to the synthesis of flat-top and cosecant squared beam patterns using the firefly algorithm, and to single-fault situations in antenna arrays, which distort the desired emission pattern, along with ways to correct the distortion automatically by re-setting the operating parameters of the remaining working antennas, making the whole antenna system fault-tolerant. This “recovery” process was simulated in Matlab, with emphasis on reduction of the side lobe level, ripple and the reflection coefficient. Simulations showed a successful application of the firefly algorithm for this purpose. Such a recovery enables uninterrupted operation of complex antenna systems at acceptable performance levels, without having to wait for repair or replacement of the faulty component.
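
For readers unfamiliar with the optimizer, the sketch below shows a generic firefly algorithm in Python; the cost function used here (deviation from an assumed amplitude taper) is only a placeholder for the paper's actual objective, which combines side lobe level, ripple and the reflection coefficient over the array factor.

  import numpy as np

  def firefly_minimize(cost, dim, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
      # Standard firefly moves: each firefly is attracted toward brighter (lower-cost)
      # ones, with attractiveness decaying with distance, plus a small random step.
      rng = np.random.default_rng(seed)
      x = rng.uniform(0.0, 1.0, size=(n, dim))            # candidate excitation vectors in [0, 1]
      f = np.array([cost(xi) for xi in x])
      for _ in range(iters):
          for i in range(n):
              for j in range(n):
                  if f[j] < f[i]:
                      beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                      step = beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                      x[i] = np.clip(x[i] + step, 0.0, 1.0)
                      f[i] = cost(x[i])
      best = int(np.argmin(f))
      return x[best], f[best]

  # Placeholder objective: deviation from an assumed 16-element amplitude taper.
  desired = np.linspace(1.0, 0.2, 16)
  best_w, best_cost = firefly_minimize(lambda w: float(np.sum((w - desired) ** 2)), dim=16)
  print(best_cost)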

In their paper The Alive-in-Range Medium Access Control Protocol to Optimize Queue Performance in Underwater Wireless Sensor Networks, V. Raina, M. K. Jha and P. P. Bhattacharya present work on optimization of a radio system operating in a special, hostile environment: an underwater wireless sensor network, composed of multiple fixed sensors and a mobile “sink” (interrogator) device collecting data from them wirelessly. Seawater is a loss-intensive medium, while both the sensors and the sink operate under severe power limitations, being usually battery-powered. To this end, the Alive-in-Range Medium Access Control (AR-MAC) protocol was adopted, with a reduced duty cycle, precise time scheduling of the sensors' active/sleep cycles, and monitoring of the mobility of the sink node, accompanied by selection of appropriate queues and schedulers to save power and prevent loss of priority data.
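
A back-of-the-envelope Python sketch of the energy argument behind duty-cycle reduction follows; the power figures are assumed for illustration and are not taken from the paper.

  def node_energy_mj(duty_cycle, period_s=1.0, hours=24, p_active_mw=60.0, p_sleep_mw=0.05):
      # Energy per day (in millijoules) of a node that keeps its radio on for
      # duty_cycle of every period and sleeps for the rest; power figures are assumed.
      cycles = hours * 3600 / period_s
      per_cycle = duty_cycle * period_s * p_active_mw + (1 - duty_cycle) * period_s * p_sleep_mw
      return cycles * per_cycle

  # Lowering the duty cycle from 10% to 1% while the sink is out of range:
  print(node_energy_mj(0.10), node_energy_mj(0.01))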

The field of optical communications is represented in this issue by two papers devoted to improving the reliability of communications over optical fibers and in free space. In both cases, the applications are in access networks and operating distances are relatively short.

While the Fiber To The Home (FTTH) technology, most often in the PON (Passive Optical Network) variant, is generally considered the best option for fixed broadband access in terms of transmission performance, FTTH networks are not immune to outages caused by cable cuts. In the paper Availability Analysis of Different PON Models, K. Rados and I. Rados analyze the protection of feeder fiber paths in FTTH-PON access networks to improve network resilience to cable cuts, which tend to be the main source of network failures in the urban environment. However, while protection by adding a spare feeder fiber between the central office (CO) and the PON splitter, and in some cases spare OLT active equipment at the CO, can improve service availability for demanding customers, it requires a significant extra investment in spare fibers laid along separate routes. The authors compare the effectiveness of PON protection schemes standardized by the ITU-T in several network scenarios. One conclusion is that protection of a short feeder fiber, say 300 m, common in dense urban networks, does not significantly improve service availability, because cable failure rates are proportional to cable length.
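
The length argument can be illustrated with a small Python sketch; the cable-cut rate and repair time used below are assumed values, not the parameters from the paper.

  HOURS_PER_YEAR = 8760.0

  def feeder_unavailability(length_km, cuts_per_km_per_year=0.05, mttr_h=12.0):
      # Fraction of time the feeder is down: the cut rate grows linearly with length.
      return cuts_per_km_per_year * length_km * mttr_h / HOURS_PER_YEAR

  u_short = feeder_unavailability(0.3)                    # 300 m urban feeder
  u_long = feeder_unavailability(20.0)                    # 20 km feeder
  u_long_protected = feeder_unavailability(20.0) ** 2     # duplicated feeder on an independent route
  for u in (u_short, u_long, u_long_protected):
      print(f"availability = {1 - u:.8f}")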

Reliability of Free Space Optics (FSO) links, sometimes used for short-distance, line-of-sight communications, can be seriously degraded by unfavorable weather conditions like rain, haze and fog. The paper Relay-assisted WDM-FSO System: A Better Solution for Communication under Rain and Haze Weather Conditions by N. Dayal, P. Singh, and P. Kaur includes an analysis (relying on computer models) of how the availability of an FSO link can be improved by inclusion of relays (repeaters or optical amplifiers) to compensate for additional attenuation caused by fog, rain, etc. As in the case of the FTTH network described in the previous paper, this improvement requires additional expenditures on equipment and installation work.
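
A simplified link-budget sketch in Python shows why splitting a span with relays helps; the transmit power, receiver sensitivity, fixed losses and the 20 dB/km haze attenuation are assumed figures, not the model parameters used by the authors.

  def link_margin_db(length_km, weather_atten_db_per_km, hops=1,
                     p_tx_dbm=10.0, sensitivity_dbm=-30.0, fixed_loss_db=10.0):
      # With N-1 relays the span is split into N shorter hops, each regenerating
      # the signal; the per-hop margin determines whether the link stays up.
      hop_len = length_km / hops
      p_rx = p_tx_dbm - weather_atten_db_per_km * hop_len - fixed_loss_db
      return p_rx - sensitivity_dbm

  # 2 km link under heavy haze (assumed 20 dB/km): direct hop vs. one mid-path relay.
  print(link_margin_db(2.0, 20.0, hops=1), link_margin_db(2.0, 20.0, hops=2))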

The next group of five papers covers diverse IT and software issues.

First, a novel variant of cloud computing, involving a cloud of smartphones and other mobile devices instead of a set of stationary data centers, is proposed by L. Siwik, D. Kala, M. Godzik, W. Turek, A. Byrski, and M. Kisiel-Dorohinicki in their paper Mobile Cloud for Parallel and Distributed Green Computing. While mobile devices are definitely no match, in terms of computing power and 24/7 availability, for the dedicated servers and data centers employed in more conventional cloud computing, they have certain advantages, such as location awareness, the potentially large number of devices comprising the cloud, some sensor functionality and the ability to assemble ad hoc to work on a local, time-critical problem. An actual computing cluster was constructed using dedicated software, and its scalability and efficiency were measured. Whether the idea will catch on, nobody knows, but it is definitely new.

Two subsequent papers are devoted to problem solving and optimization issues.

The Monte Carlo Tree Search Algorithm for the Euclidean Steiner Tree Problem by M. Bereta presents a novel Monte Carlo Tree Search algorithm for solving the minimal Euclidean Steiner tree problem in the plane. The goal is to connect all the given points (terminals) in the plane so that the sum of the edge lengths is as low as possible, while the addition of extra points (Steiner points) is allowed. This is a very important problem in the design and optimization of telecom cable networks, water, gas and electric power distribution systems, etc., where the construction expenses, demand for materials, failure rates, and maintenance costs are approximately proportional to the total length of routes in a given network. The new algorithm combines Monte Carlo Tree Search with the proposed heuristics and works better than both the greedy heuristic and pure Monte Carlo simulations. The results of numerical experiments for randomly generated and benchmark library problems (from OR-Lib) are presented and discussed.
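
The elementary building block of the problem can be shown in a few lines of Python: for three terminals, routing through a single Steiner point (found here by Weiszfeld iteration) is never longer than the best two direct edges. This only illustrates why Steiner points pay off; it is not the MCTS algorithm proposed in the paper.

  import numpy as np

  def geometric_median(points, iters=200):
      # Weiszfeld iteration: converges to the point minimizing total distance to the terminals.
      x = points.mean(axis=0)
      for _ in range(iters):
          d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
          x = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
      return x

  terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
  edge_lengths = sorted(np.linalg.norm(terminals[i] - terminals[j])
                        for i in range(3) for j in range(i + 1, 3))
  mst_length = sum(edge_lengths[:2])                      # best tree without extra points
  s = geometric_median(terminals)                         # candidate Steiner point
  steiner_length = np.linalg.norm(terminals - s, axis=1).sum()
  print(mst_length, steiner_length)                       # the Steiner tree is shorter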

The main drawback of the Batch Back Propagation (BBP) algorithm, widely used in neural network training, is that training is slow and several parameters must be adjusted manually. The BBP algorithm also suffers from saturation during training, finding a local minimum instead of the global one. M. S. Al Duais and F. S. Mohamad in their paper Dynamically-adaptive Weight in Batch Back Propagation Algorithm via Dynamic Training Rate for Speedup and Accuracy Training present an attempt to improve the speed of training and avoid the saturation effect. A Dynamic Batch Back Propagation (DBBPR) algorithm with a dynamic training rate is introduced. Results of Matlab simulations show that the new algorithm is much better than BBP in terms of training speed and accuracy.
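
The general idea of a dynamic training rate can be sketched in Python with a simple adaptive rule (grow the rate while the batch error falls, shrink it when the error rises) on a one-layer sigmoid network; the actual DBBPR update rules in the paper are different and more elaborate.

  import numpy as np

  def train_batch(X, y, epochs=200, lr=0.1, grow=1.05, shrink=0.5, seed=0):
      # One-layer sigmoid network trained in batch mode; the training rate grows
      # while the error keeps falling and is cut back when the error rises.
      rng = np.random.default_rng(seed)
      w = rng.normal(scale=0.1, size=X.shape[1])
      prev_err = np.inf
      for _ in range(epochs):
          out = 1.0 / (1.0 + np.exp(-X @ w))
          err = np.mean((out - y) ** 2)
          grad = X.T @ ((out - y) * out * (1 - out)) / len(y)
          lr = lr * grow if err < prev_err else lr * shrink
          w -= lr * grad
          prev_err = err
      return w, err

  X = np.hstack([np.random.randn(200, 2), np.ones((200, 1))])    # two inputs plus a bias column
  y = (X[:, 0] + X[:, 1] > 0).astype(float)
  w, final_err = train_batch(X, y)
  print(final_err)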

Software testing is getting more and more complicated and time-consuming as the size and functionality of the programs under assessment grow. One of the widely adopted methods is so-called mutation testing, where small, random changes (mutations) are made to the code, which is later executed in parallel with the unmodified version, and their operation is compared. This technique is analyzed by L. T. My Hanh, N. T. Binh, and K. T. Tung in the paper titled Parallel Mutant Execution Techniques in Mutation Testing Process for Simulink Models. The authors propose three strategies for parallel execution of mutants on multicore machines using the Parallel Computing Toolbox (PCT) with the Matlab Distributed Computing Server, and demonstrate that computationally intensive software testing schemes, such as mutation, can be facilitated by employment of parallel processing.
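
The parallelization pattern translates directly to other environments; the Python sketch below (a process pool instead of the Matlab Parallel Computing Toolbox) runs hypothetical mutants of a toy function against a toy test suite in parallel.

  from concurrent.futures import ProcessPoolExecutor

  def reference(x):
      return x * x

  # Hypothetical mutants of the reference function (in the paper these are mutated Simulink models).
  def mutant_equivalent(x):
      return x * x
  def mutant_plus(x):
      return x + x
  def mutant_cube(x):
      return x * x * x

  MUTANTS = [("M1", mutant_equivalent), ("M2", mutant_plus), ("M3", mutant_cube)]
  TEST_INPUTS = range(-5, 6)

  def run_mutant(mutant):
      # Execute the test suite against one mutant; a mutant is "killed" when any test detects it.
      mutant_id, mutated = mutant
      killed = any(mutated(x) != reference(x) for x in TEST_INPUTS)
      return mutant_id, killed

  if __name__ == "__main__":
      with ProcessPoolExecutor() as pool:             # mutants are executed in parallel processes
          for mutant_id, killed in pool.map(run_mutant, MUTANTS):
              print(mutant_id, "killed" if killed else "survived")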

The popularity of online services involving money and sensitive data (banking, commerce, database access, remote work, system management, etc.) brings with it the familiar and dreadful issue of computer crime and password theft. Various methods of generating and entering more secure passwords are being proposed and tested, including rather complicated biometric techniques. However, there is a trade-off between the security and the complexity (and error rate) of a given password scheme, so relatively simple solutions are being looked at as well. One of them is entering a conventional password, a classic string of characters, but with variable (long and short) pauses between them. The corresponding password recognition process is time-sensitive, so stealing the character string alone (as performed by existing spying software) is not enough for a successful login, and the attack is immediately detected. This technique is presented by K. W. Mahmoud, K. Mansour and A. Makableh in the paper Detecting Password File Theft using Predefined Time-Delays between Certain Password Characters. However, currently this is only a proposal in need of verification in a real environment, and one must note that writing spyware capable of recording the relative time spacing between characters would be relatively simple.
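
A minimal Python sketch of the verification idea follows: the server accepts a login only if both the characters and the long/short classification of selected inter-character pauses match. The stored pattern, the 1-second threshold and the helper functions are hypothetical; the paper defines these details differently.

  STORED_PASSWORD = "s3cret"
  STORED_GAP_PATTERN = {2: "long", 4: "short"}   # required pause class before the 3rd and 5th characters
  LONG_GAP_S = 1.0                               # assumed threshold separating "long" from "short" pauses

  def classify(gap_s):
      return "long" if gap_s >= LONG_GAP_S else "short"

  def verify(chars, timestamps):
      # Accept only if both the characters and the timing pattern match the stored profile.
      if "".join(chars) != STORED_PASSWORD:
          return False
      gaps = {i: timestamps[i] - timestamps[i - 1] for i in range(1, len(chars))}
      return all(classify(gaps[i]) == req for i, req in STORED_GAP_PATTERN.items())

  def times_from_gaps(gaps, t0=0.0):
      ts = [t0]
      for g in gaps:
          ts.append(ts[-1] + g)
      return ts

  # A stolen character string replayed with uniform timing fails; the legitimate pattern passes.
  print(verify(list(STORED_PASSWORD), times_from_gaps([0.2] * 5)),
        verify(list(STORED_PASSWORD), times_from_gaps([0.2, 1.5, 0.3, 0.2, 0.4])))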

While traditional postal services appear to be more and more outdated in the 21st century, being steadily displaced by e-mail and other forms of electronic communications, they are still in demand, in particular for some business and advertising applications. For business customers, it is the quality of postal service that counts, and this quality needs to be monitored using standardized methods. The last paper, Evolution of Measurement of the Single Piece Mail Transit Time by R. Kobus and F. Raudszus, presents how the measurement of the transit time (the main parameter of interest in judging the quality of domestic and cross-border postal services) of priority mail for home and small business customers in the E.U. has been performed and how it has evolved since its introduction in 1994. The test is performed with a set of letters, addressed to multiple geographical segments, posted and received by an independent test panel. However, test details must be adjusted in the case of small member states with limited volume of postal traffic and special local conditions. Testing can be made cheaper and more accurate if the test letters are replaced with monitoring of real mail in transit. Alas, mail processing is not completely automated yet, making such a move premature.
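
The headline statistic of such a test (the share of letters delivered within D+n days of posting) reduces to simple date arithmetic, as in the Python sketch below; the panel data are made up, and the real methodology adds panel design, geographical stratification and weighting not reproduced here.

  from datetime import date

  def on_time_share(postings, n=1):
      # postings: list of (posting_date, delivery_date) pairs from the test panel.
      on_time = sum(1 for posted, delivered in postings if (delivered - posted).days <= n)
      return on_time / len(postings)

  panel = [(date(2017, 3, 1), date(2017, 3, 2)),
           (date(2017, 3, 1), date(2017, 3, 3)),
           (date(2017, 3, 2), date(2017, 3, 3)),
           (date(2017, 3, 2), date(2017, 3, 6))]
  print(f"D+1: {on_time_share(panel, 1):.0%}   D+3: {on_time_share(panel, 3):.0%}")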

Krzysztof Borzycki

Associate Editor

Improvement of language quality; Assigning DOIs; Subscription to the plagiarism detection system – tasks financed under 556/P-DUN/2017 agreement from the budget of the Ministry of Science and Higher Education under the science dissemination fund.