Volume 9, Number 1, April 2011


Computing and Information Sciences is a peer-reviewed journal committed to the timely publication of original research, survey, and tutorial contributions on the analysis and development of computing and information science. The journal is designed mainly to serve researchers and developers dealing with information and computing. Papers that provide both theoretical analysis and carefully designed computational experiments are particularly welcome. The journal is published two to three times per year, with distribution to librarians, universities, research centers, and researchers in computing, mathematics, and information science. The journal maintains strict refereeing procedures through its editorial policies in order to publish only papers of the highest quality. Refereeing is done by anonymous reviewers. Reviews often take four to six months to obtain, occasionally longer, and the publication process takes several additional months.

Paper 1: GBARMVC: Generic Basis of Association Rules based approach for Missing Values Completion

GBARMVC: Generic Basis of Association Rules based approach for Missing Values Completion

Leila Ben Othman and Sadok Ben Yahia

Abstract: When tackling real-life datasets, it is common to face scrambled missing values within the data. Considered as "dirty data", they are usually removed during the pre-processing step of the KDD process. Starting from the premise that "making up this missing data is better than throwing it away", we present a new approach that attempts to complete the missing data. The main singularity of the introduced approach is that it sheds light on a fruitful synergy between generic bases of association rules and the topic of missing-value handling. In fact, beyond an interesting compactness rate, such generic association rules make it possible to obtain a considerable reduction of conflicts during the completion step. A new metric called "Robustness" is also introduced, which aims to select the most robust association rule for the completion of a missing value whenever a conflict appears. Experiments carried out on benchmark datasets confirm the soundness of our approach: it reduces conflicts during the completion step while offering a high percentage of correct completions.



Paper 2: Dynamic Load-Balancing Based on a Coordinator and Backup Automatic Election in Distributed Systems

Dynamic Load-Balancing Based on a Coordinator and Backup Automatic Election in Distributed Systems  

Tarek Helmy and Fahd S. Al-Otaibi

Abstract: In a distributed system environment, it is likely that some nodes are heavily loaded while others are lightly loaded or even idle. It is desirable that the workload be evenly distributed among all nodes so as to utilize the processing time and optimize overall performance. A load-balancing mechanism decides where and when to migrate a process. This paper introduces a load-balancing mechanism as a new scheme to support reliability and to increase the overall throughput of distributed system environments. The idea is to assign one node as a coordinator in addition to a backup node, with the possibility of automatic election in case both the coordinator and the backup fail. The presented scheme has been integrated into the Zap system. Zap provides a transparent checkpoint-restart mechanism for migrating a PrOcess Domain (POD). A POD provides a group of processes with a private namespace that presents the process group with the same virtualized view of the system. Experimental results show that the load among all nodes is balanced and the freezing time is low compared with other load-balancing mechanisms, such as random selection of the destination, unless the number of communication messages needed for migrating a POD becomes high.



Paper 3: Design of Normalized Relation: An Ameliorated Tool

Design of Normalized Relation: An Ameliorated Tool

Ajeet A. Chikkamannur and Shivanand M. Handigund

Abstract: Normalization is a practice used to design the relation(s) of a good database by eliminating undesirable functional dependencies that exist amongst the attributes of a relation. The complexities involved in the normalization of relations have deterred vendors from automating the normalization process, even though the normalization keyword exists in the data manipulation language of the Structured Query Language (SQL) standard. This paper unravels the complexities involved in the normalization process and proposes an automatic methodology for refining relations through normalization. The primary key for each relation is designed from the minimal set of attribute(s) that uniquely determines the other attribute values of a tuple in the relation. Utilizing a blend of analytical and synthetic approaches, the proposed implementation process forms and refines the relations, grouping (with the use of axioms) the desirable functional dependencies of the relation to satisfy the first, second, and third normal form rules.



Paper 4: High Level Optimized Parallel Specification of an H.264/AVC Video Encoder

High Level Optimized Parallel Specification of an H.264/AVC Video Encoder

Hajer Krichene Zrida, Ahmed C. Ammari, Abderrazek Jemai, Mohamed Abid

Abstract: H.264/AVC (Advanced Video Coding) is a video coding standard developed through a joint effort of the ITU-T VCEG and ISO/IEC MPEG. This standard provides higher coding efficiency than former standards at the expense of higher computational requirements. Implementing the H.264 video encoder on an embedded System-on-Chip (SoC) is thus a big challenge. For an efficient implementation, we motivate the use of multiprocessor platforms for the execution of a parallel model of the encoder. For this purpose, we propose a high-level parallelization approach for the development of an optimized parallel model of an H.264/AVC encoder for embedded SoCs. This approach is used independently of the architectural issues of any target platform. It is based on the simultaneous exploration of task-level and data-level forms of parallelism, and on the use of the parallel Kahn Process Network (KPN) model of computation and the YAPI C++ programming runtime library. To demonstrate the effectiveness of the obtained parallel model of the H.264 encoder, the encoding performance has been evaluated by system-level simulations targeting multiple multiprocessor platforms.




Paper 5: Unifying Speech Resources for Tone Languages: A Computational Perspective

Unifying Speech Resources for Tone Languages: A Computational Perspective

Moses E. Ekpenyong, Eno-Abasi E. Urua, Victor J. Ekong, Okure U. Obot and Imelda I. Udoh


Abstract: In this paper, we propose a computational approach to unifying speech resources for tone languages. The paper proceeds in three parts: (i) it provides background and a tutorial on one of the speech resources (the Ibibio speech synthesizer), discussing the synthesis development phases based on working experience with the Festival Text-to-Speech (TTS) system; (ii) it suggests ways of sustaining the synthesis development, with a perspective and infrastructural solution that will not only make the synthesizer more intelligible but also make it easily replicable for other tone languages; (iii) it introduces the archiving of a new language resource (the talking Medefaidrin Web dictionary). These resources require standardization to make them interoperable for unified documentation. An architectural design for the unification of the resources is finally presented to further this innovation. It is hoped that this paper will spin off interest and further collaborative ties that will expand our network horizons and ensure a dependable information base for archiving tone language resources.




Prof. Jihad Mohamad Alja'am 
Email: journal.editor.ijcis@gmail.com

The Journal Secretary
Eng. Dana Bandok
Ontario, Canada 
Email: sec.ijcis@gmail.com 

