A Cost Model for Heterogeneous Skeletons for CPU/GPU Systems

Abstract

Algorithmic skeletons are widely used to manage multi-processor computations, but they are most effective when deployed for regular problems on homogeneous systems, where tasks may be divided evenly without regard for processor characteristics. With the growth of heterogeneity, where a multicore is coupled with GPUs, skeletons become layered and simple task distribution becomes sub-optimal. We explore heterogeneous skeletons that use a simple cost model, based on a small number of key architectural characteristics, to find good task distributions on heterogeneous multicore architectures. We present a new extension to an existing skeleton library and its associated cost model that enables GPUs to be exploited as general-purpose multi-processor devices in heterogeneous multicore/GPU systems. The extended cost model is used to automatically find a good distribution for both a single heterogeneous multicore/GPU node and clusters of heterogeneous multicore/GPU nodes.
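The kind of cost-based task distribution described above can be sketched as follows; the `distribute` function and the per-device throughput figures are illustrative assumptions, not the paper's actual cost model.

```python
# Sketch: split N tasks across devices in proportion to measured
# throughput -- a simple static cost model with hypothetical numbers.

def distribute(total_tasks, throughputs):
    """Return a per-device task count proportional to device throughput."""
    total_rate = sum(throughputs.values())
    shares = {dev: int(total_tasks * rate / total_rate)
              for dev, rate in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    remainder = total_tasks - sum(shares.values())
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += remainder
    return shares

# Hypothetical per-device rates (tasks/second) for one multicore/GPU node.
rates = {"cpu_cores": 40.0, "gpu": 160.0}
print(distribute(1000, rates))  # the GPU receives the larger share
```

A real cost model would derive the rates from architecture characteristics (core counts, clock speeds, transfer costs) rather than hard-coding them.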

  • Keywords: Parallel, Skeleton, Heterogeneous, Cost model, Multicore, GPU; General Terms: Algorithms, Design, Performance
  • Authors: Khari A. Armih, Mustafa K. Aswad

Prediction of short-term load using artificial neural networks

Abstract

Neural networks can be used to solve particular problems that are difficult for human beings or conventional computational algorithms. Computationally, training comes down to the adjustment of certain weights, which are the key elements of an artificial neural network; this is one of the key differences between the neural network approach to problem solving and conventional computational algorithms. The adjustment of the weights takes place when the neural network is presented with input data records and the corresponding target values. Because neural networks can be trained with off-line data, they have been found useful for power system applications. This paper focuses on short-term load prediction using three types of neural networks. An accurate short-term forecasting method for the load of an electric power system can help the system operator reduce the risk of unreliable electricity supply. In this paper, a radial basis function neural network (RBFNN), a Nonlinear Autoregressive model with eXogenous input neural network (NARXNN), and a backpropagation neural network (BPNN) were developed to predict the short-term load. Simulation results show that all three types of neural networks can predict the load efficiently. The neural network simulations were implemented using the Matlab program. The accuracy of load prediction using the neural networks was investigated and assessed in terms of mean absolute percentage error (MAPE).
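The MAPE metric used to assess the forecasts can be sketched as follows; the load and forecast values below are made-up examples, not results from the paper.

```python
# Mean absolute percentage error (MAPE), the accuracy metric used to
# assess the short-term load forecasts. Values are illustrative only.

def mape(actual, predicted):
    """MAPE in percent; actual values must be non-zero."""
    n = len(actual)
    return 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

actual_load = [120.0, 135.0, 150.0, 140.0]   # hypothetical hourly loads (MW)
forecast    = [118.0, 138.0, 147.0, 141.0]   # hypothetical NN predictions
print(f"MAPE = {mape(actual_load, forecast):.2f}%")
```

A lower MAPE indicates a more accurate forecast; the paper uses it to compare the RBFNN, NARXNN, and BPNN models.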

  • Keywords: Short-term load prediction, neural networks, RBFNN, NARXNN, BPNN, MAPE, Matlab
  • Authors: A. Shebani

Development and Applications of Bulk-Driven Current Mirrors

Abstract

My work starts with designing a current mirror for low-voltage, low-power applications. The current source/sink is a basic building block in Complementary Metal-Oxide-Semiconductor Integrated Circuit (CMOS IC) design and is used extensively in analog integrated circuit design. Ideally, the output impedance of a current source/sink should be infinite, and it should be capable of generating or drawing a constant current over a wide range of voltages. However, a finite output resistance (r_o) and the limited output swing required to keep devices in saturation ultimately limit the performance. This work studies the conventional gate-driven current mirror and the low-voltage, low-power bulk-driven current mirrors, and presents a comparative study between them.

  • Keywords: current mirror, Metal-Oxide-Semiconductor Transistors (MOSTs), bulk-driven and gate-driven MOSFETs
  • Authors: Abedalhakem Alkowash, Imhammad Abood, Abdualbaset Asahi

Semantics of Parallel Primitives in the Haskell Programming Language

Abstract

Nowadays, heterogeneous multi-core has become the mainstream computer architecture, merging hundreds of cores and combining accelerators such as GPUs on a traditional chip. Programming models need to exploit all the resources at the hand of the programmer with minimum effort. This paper describes the semantics of the parallel primitives of the Haskell programming language; these primitives exploit the underlying architecture's resources. When a new primitive is added to a programming language, we need to prove its expected behaviour. This paper specifies the expected behaviour of the constructs by Haskell functions, achieving an executable specification. We have formulated several properties as Haskell predicates and used QuickCheck to check them on random input. Three basic properties represent sanity checks of the semantics. Two proposed implementation-relevant properties did not hold, and counterexamples extracted from QuickCheck identified diffusion of sparks as the problem. In the implementation, we avoided this problem by resetting the boundaries after one fishing stage. The final property, checked with QuickCheck, shows that with this modification the desired property holds.
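QuickCheck-style random property checking, as used above, can be sketched in Python; Haskell's QuickCheck is the real tool, and the `check` driver, generator, and sample property below are illustrative stand-ins, not the paper's predicates.

```python
# Sketch of QuickCheck-style testing: run a boolean property on many
# random inputs and report a failing input (counterexample) if one exists.
import random

def check(prop, gen, trials=200):
    """Return a counterexample to `prop`, or None if it held on all trials."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x          # counterexample found
    return None               # property held on every random input

# A sanity-check style property: splitting work and recombining loses nothing.
def split_merge_identity(xs):
    mid = len(xs) // 2
    return xs[:mid] + xs[mid:] == xs

gen_list = lambda: [random.randint(0, 9) for _ in range(random.randint(0, 8))]
assert check(split_merge_identity, gen_list) is None
```

Real QuickCheck additionally shrinks counterexamples to minimal failing inputs, which is what makes the extracted counterexamples in the paper readable.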

  • Authors: Dr. Mustafa Kh. Aswad

Performance Evaluation of Solar Cells after 31 Years of Operation

Abstract

Crystalline silicon photovoltaic modules have been used for a long time in many photovoltaic applications. It was not expected that photovoltaic modules of the old technology would last for twenty years. In contrast, a photovoltaic system installed in 1979 in the Libyan Desert is still running, with a small decrease in output power and small changes in its design parameters. The goal of the study is to evaluate the performance of thirty-year-old crystalline silicon cells under standard test conditions. Some of the solar cells were dismantled from one of the photovoltaic modules, which had been working for more than thirty years, in order to test and measure their current-voltage curves. This paper presents the results of indoor measurements on the thirty-year-old solar cells.

  • Keywords: standard test conditions, qualification and testing, solar cells
  • Authors: Mohamed A. S. Alshushan, Ibrahim M. Saleh

Development of a new method for detailed wheel and rail roughness measurements using replica material and a Dektak profilometer

Abstract

Wheelsets are among the most expensive components over the life of a rail vehicle. They require regular maintenance activities such as reprofiling on a wheel lathe, inspection for safety-critical damage to the wheel and axle, and renewal of the wheelset. There are several reasons for reprofiling, such as tread wear, flange wear, and thermal damage; meanwhile, the cost of changing damaged rails is much greater than that of changing any other damaged part of the track. Wheel and rail damage has been a concern in railway systems for several decades, and the change of wheel and rail profiles makes a large contribution to track maintenance cost. Developing a new method to measure wheel/rail surface roughness can help improve the design of wheel and rail profiles, since wheel/rail surface roughness is correlated with wheel/rail safety and economy. Therefore, the main aim of this work is to develop a new method for measuring wheel/rail roughness parameters using a Dektak profilometer and replica material. The replica technique is very useful in situations in which it is difficult to reach the surface in order to measure it, such as when the specimen is large; it is also useful when components change due to wear and mechanical actions and a record of the original surface is needed. In this paper, replica material was applied to the wheel and rail surfaces of a twin disc test rig to make a copy of the wheel and rail; the replica samples were then measured using a Dektak profilometer, and the results were processed to establish wheel and rail roughness parameters.
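The standard amplitude roughness parameters Ra (arithmetic mean deviation) and Rq (root-mean-square deviation) mentioned above can be computed from a measured height profile as follows; the height values are made-up numbers standing in for Dektak profilometer output, not measurements from the paper.

```python
# Ra and Rq computed from a list of surface heights about the mean line.
import math

def roughness(profile):
    """Return (Ra, Rq) for a list of surface heights (same units as input)."""
    mean = sum(profile) / len(profile)
    dev = [z - mean for z in profile]
    ra = sum(abs(d) for d in dev) / len(dev)              # arithmetic mean deviation
    rq = math.sqrt(sum(d * d for d in dev) / len(dev))    # root-mean-square deviation
    return ra, rq

heights_um = [0.2, -0.1, 0.3, -0.4, 0.1, -0.1]  # hypothetical heights, micrometres
ra, rq = roughness(heights_um)
print(f"Ra = {ra:.3f} um, Rq = {rq:.3f} um")
```

In practice the profile would first be filtered to separate roughness from waviness and form before these parameters are evaluated.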

  • Keywords: wheel and rail roughness, measurements, Dektak profilometer, replica material, twin disc rig
  • Authors: A. Shebani, C. Pislaru

Implementation of Substitution Cipher on Field Programmable Gate Arrays

Abstract

This paper presents the implementation of a substitution cipher on an FPGA (Field-Programmable Gate Array). FPGAs provide a faster data rate, more flexibility to make changes to the program, and better physical security than other hardware. The design was coded, simulated, and tested with MATLAB/Simulink and Xilinx System Generator, implemented on Xilinx Spartan-3A DSP XC3SD3400A-4CSG84C hardware, and tested using Xilinx ISE 12.4.
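A minimal software sketch of the monoalphabetic substitution cipher being implemented in hardware; the substitution key below is an arbitrary example, not the one used in the FPGA design.

```python
# Monoalphabetic substitution cipher: each plaintext letter maps to a
# fixed cipher letter; decryption applies the inverse mapping.
import string

PLAIN = string.ascii_uppercase
KEY   = "QWERTYUIOPASDFGHJKLZXCVBNM"   # example substitution alphabet

ENC = str.maketrans(PLAIN, KEY)        # A->Q, B->W, C->E, ...
DEC = str.maketrans(KEY, PLAIN)        # inverse table

def encrypt(text): return text.upper().translate(ENC)
def decrypt(text): return text.upper().translate(DEC)

msg = "HELLO FPGA"
ct = encrypt(msg)
assert decrypt(ct) == msg              # round trip recovers the message
print(ct)
```

In the FPGA version the same mapping would typically be realised as a small lookup table (ROM), giving one substitution per clock cycle.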

  • Keywords: Decryption, Encryption, FPGA, MATLAB, Substitution Cipher, Xilinx
  • Authors: Ali F. Kaeib, Osama A. S. Abourodes

A Comparative Study of Lossless Data Compression Techniques

Abstract

Data compression is a process that reduces data size by removing excessive information and redundancy. It is a common and important requirement for most computerized applications: it can shorten the data size, which leads to cost reduction. The main purpose of data compression is to remove redundancy from stored or transmitted data; it is also an important application in file storage and distributed systems. Data compression techniques can be used on different data formats such as text, audio, video, and image files. The aim of this study is to compare several lossless data compression techniques and their performance. There are many data compression techniques, and they can be categorized as lossy and lossless compression methods. In this study, the Run-length encoding, Huffman coding, Shannon-Fano coding, and LZW encoding algorithms were used; their performance was compared on text data, and the compression ratio, compression factor, and saving percentage were calculated. The compression ratios of Huffman coding and Shannon-Fano coding were lower than those of Run-length encoding and LZW encoding (38%, 40%, 81%, 74%, respectively), while their compression factors were higher (2.63, 2.48, 1.23, 1.35, respectively), and the saving percentages obtained with Huffman coding and Shannon-Fano coding were higher than those of Run-length encoding and LZW encoding (62%, 60%, 19%, 26%, respectively). The study points to the effectiveness of Huffman and Shannon-Fano coding in reducing file size compared to the other algorithms.
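The three metrics compared in the study relate to each other as sketched below; the file sizes are made-up examples, though the resulting figures match the pattern reported above (a 38% ratio corresponds to a 2.63 factor and 62% saving).

```python
# Compression ratio, compression factor, and saving percentage computed
# from original and compressed sizes in bytes (sizes are illustrative).

def compression_metrics(original_size, compressed_size):
    ratio  = compressed_size / original_size   # smaller is better
    factor = original_size / compressed_size   # reciprocal of the ratio
    saving = 1.0 - ratio                       # fraction of space saved
    return ratio, factor, saving

ratio, factor, saving = compression_metrics(10000, 3800)
print(f"ratio={ratio:.0%} factor={factor:.2f} saving={saving:.0%}")
```

Because the factor is the reciprocal of the ratio and the saving is its complement, any one of the three determines the other two.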

  • Keywords: Data compression, Lossless data compression technique, Huffman Coding, Run-length encoding, Shannon-Fano coding, LZW encoding
  • Authors: Elham Yakhlef Abushwashi, Hamida Aboulqasim Oushah

A CIS Framework for a Libyan District

Abstract

In the last decades, with the proliferation of digital data and the availability of digital maps, the use of geographic information systems (GIS) has become the best technique for developing cadastral information systems (CIS). The digital cadastral database (DCDB), which records real coordinates for cadastral maps, is hampered by many land laws in the country. We present a new framework for developing CIS applications to assist real estate registration. The new framework can be used as a guide for developers, helping them create a development plan and define the system requirements. A cadastral web mapping solution for a Libyan district (CW-MSLD) is developed using the proposed framework. The CW-MSLD system is based on a pilot case study in the capital city of Libya. The prototype is developed using modern GIS techniques (web mapping), which make maps and geo-information available to groups of end users through a web page. The prototype tool triggers the map server software, which integrates the map data stored in the DCDB with the land register data stored in the database. The information derived from the system can be used to register or transfer ownership for the cadastral map and, further, to issue a cadastral certificate for the registered cadastral unit (real estate).

  • Keywords: GIS, Cadastral, Digital Mapping, Framework, Land Registration; General Terms: Algorithms, Design
  • Authors: Amal O. Abdulghader, Khari A. Armih