Book Title: Proceedings of the 2017 Future Technologies Conference (FTC)
Dates: 29-30 November 2017
Year: 2017
Location: Vancouver, Canada
Publisher: The Science and Information (SAI) Organization
Abstract: The paper discusses a possible extension of a cloud-based Information Rights Management (IRM) model with enhanced accountability for both a sticky policy and the data attached to it. This work complements research on secure data sharing with an Office Open XML (OOXML) package extended by a sticky policy in eXtensible Access Control Mark-up Language (XACML) format. The research used an Identity Based Encryption (IBE) primitive to securely bind the policy and the data together. The high availability required from a cloud service is achieved here using distributed system components. The leveraged technologies, the Git repository and the blockchain, are not new; however, their application to an IRM system is novel and brings it closer to a universal approach and an open architectural construct.
Authors: Grzegorz Spyra, William J Buchanan, Elias Ekonomou
Keywords: Sticky policies; git repositories; Blockchain
Abstract: The latency and throughput of blockchain-based cryptocurrencies are a major concern for their suitability as mainstream currencies and as transaction processors in general. The prevalent proof-of-work scheme, exemplified by Bitcoin, is a deliberately laborious effort: the time and energy required to mine blocks makes the blockchain virtually immutable and assists in the consensus-reaching process. Coinspermia (coin=money + spermia=seed) is a different approach: transactions are concurrently seeded throughout a network of peer nodes to an extent sufficient to achieve a high reliability of essential currency operations, including the fast transfer of coins from an owner to a recipient, and the prevention of double spending. A number of Bitcoin features are retained in Coinspermia, including transaction input-outputs and cryptographic addresses and signing, but no special proof-of-work is required to commit transactions. Instead, a client can be assured of an operation's completion when a quorum of network nodes acknowledges the operation, which can occur before a transaction operation finishes propagating through the network. Simulation substantiates improved latency and throughput.
Authors: Tom Portegys
Keywords: Cryptocurrency; Bitcoin; peer-to-peer network; distributed transactions; quorum system; blockchain
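As a rough illustration of the quorum rule described above (not the authors' implementation), the following Python sketch treats an operation as committed once acknowledgements from peer nodes exceed a majority; the node count and quorum fraction are illustrative assumptions.

```python
# Minimal sketch of quorum-based commitment: a client considers a
# transaction committed once enough peers have acknowledged it.
# The 51% fraction and five-node network are illustrative assumptions.
def is_committed(acks: set, num_nodes: int, quorum_fraction: float = 0.51) -> bool:
    """Return True once acknowledgements reach the quorum threshold."""
    return len(acks) >= int(num_nodes * quorum_fraction) + 1

acks = {"node3", "node7", "node9"}      # peers that acknowledged so far
print(is_committed(acks, num_nodes=5))  # True: 3 of 5 is a majority
```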
Abstract: The tremendous technological advancement of the last few decades has enabled many enterprises to collaborate in a better way while making intelligent decisions. The use of Information Technology tools to obtain data about people's everyday lives from various autonomous data sources, allowing unrestricted access to user data, has emerged as an important practical issue and has given rise to legal implications. Various innovative models for data sharing and management have privacy and centrality issues. To alleviate these limitations, we have incorporated blockchain in user modeling. In this paper, we construct a decentralized data sharing architecture with the MultiChain blockchain in the travel domain, which is also applicable to other similar domains including education, health, and sports. Businesses that operate in the tourism industry, such as travel and tour agencies, hotels and resorts, shopping malls, etc., are connected to the MultiChain and share their user profile data via streams in the MultiChain. The paper presents a hotel booking service for an imaginary hotel as one of the enterprise nodes, which collects users' profile data with proper validation and allows users to decide which of their data are shared, thus ensuring user control over their data and the preservation of privacy. The data from the repository are converted into an open data format while being shared via a stream in the blockchain so that other enterprise nodes, after receiving the data, can easily convert and store them in their own repositories. The paper presents an evaluation of the performance of the model by measuring latency and memory consumption in three test scenarios that most affect the user experience. The node responded quickly in all of these cases, building a better and more engaging user experience. The paper also proposes a smart contract concept, in the form of a finite state machine, in the expanding domain of privacy-preserving data sharing and management.
Authors: Ajay Kumar Shrestha, Ralph Deters, Julita Vassileva
Keywords: Privacy; user modeling; blockchain; data sharing; stream; latency; memory consumption
Abstract: Blockchains and distributed ledger technology promise trusted and immutable records in a wide variety of use cases involving recordkeeping, including real estate and healthcare. This paper presents a novel framework for evaluating the capability of innovative blockchain-based systems to deliver trustworthy recordkeeping based on archival science, an ancient science aimed at the long-term preservation of authentic records.
Authors: Victoria L. Lemieux
Keywords: Archival science; authenticity; blockchain; distributed ledger; integrity; reliability; trust
Abstract: Smart contracts are autonomous software executing predefined conditions. Two of the biggest advantages of smart contracts are secured protocols and reduced transaction costs. On the Ethereum platform, an open-source blockchain-based platform, smart contracts implement a distributed virtual machine on the distributed ledger. To avoid denial-of-service attacks and to monetize the services, payment transactions are executed whenever code is executed between contracts. It is thus natural to investigate whether predictive analysis is capable of forecasting these interactions. We have addressed this issue and propose an innovative application of the tensor decomposition CANDECOMP/PARAFAC to the temporal link prediction of smart contracts. We introduce a new approach leveraging stochastic processes for series predictions based on the tensor decomposition that can be used for smart contract predictive analytics.
Authors: Jeremy Charlier, Radu State, Jean Hilger
Keywords: Tensors; CANDECOMP/PARAFAC decomposition; stochastic processes simulation
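For readers unfamiliar with CANDECOMP/PARAFAC, the following minimal sketch, assuming the tensorly library and a toy random (sender, receiver, time) interaction tensor, shows the kind of CP factorization the abstract builds on; the final extrapolation step is a naive stand-in for the paper's stochastic-process predictions.

```python
# CP (CANDECOMP/PARAFAC) decomposition of a toy interaction tensor.
# The data and the rank are illustrative assumptions.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

T = tl.tensor(np.random.rand(20, 20, 12))   # contracts x contracts x weeks
weights, factors = parafac(T, rank=3)       # CP factor matrices A, B, C
A, B, C = factors
# Naive temporal extrapolation: reuse the last time factor to score links.
scores = np.einsum('ir,jr,r->ij', A, B, C[-1])
print(scores.shape)                         # (20, 20) predicted link scores
```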
Abstract: A wine supply chain traceability system has become a necessity due to the increase in counterfeiting, adulteration, and the use of excessive preservatives and hazardous chemicals. To overcome these issues, the wine industry needs a traceability system that enables a consumer to verify the composition of each batch of wine from the grape growers to the retailers. However, most current systems are RFID- and web-based, and thus it is possible to counterfeit stored information as required. This study proposes a blockchain-based wine supply chain traceability system where every transaction is recorded as a block in the chain and is visible to the relevant participants. These blocks of information are immutable, since any change to the recorded information will break the chain. In addition to providing a quality information management framework, the proposed traceability system enables transparency, safety, and security in the overall process from the grape to the bottle.
Authors: Kamanashis Biswas, Vallipuram Muthukkumarasamy, Wee Lum Tan
Keywords: Supply chain traceability; blockchain; consensus; miner; transparency
Abstract: Achieving data confidentiality and privacy while maintaining secure access is essential in various fields, including the medical sector. Implementing a blockchain-based technology to secure sensitive data ensures that users own their data and have control over who can access it. While blockchain technology is still in its infancy, it is at the cutting edge of research in many industries and institutions. The decentralized nature of blockchain technology and the presence of smart contracts in Ethereum are two major features that can be utilized to create a novel data sharing and access system that is secure, flexible, and more reliable. In this paper, we investigate the use of smart contracts between heterogeneous blockchains for the purpose of achieving secure interoperability for data sharing and access control. As a proof of concept, we propose and implement a record management system for healthcare data, where access to healthcare providers' databases is managed through a private blockchain, available only to healthcare providers, and patients access their medical records through a public blockchain. Additionally, we develop a set of smart contracts for each blockchain to control access, manage storage, and enable interoperability between the two blockchains.
Authors: Gaby G. Dagher, Chandra L. Adhikari, Tyler Enderson
Keywords: Blockchain; Ethereum; smart contract
Abstract: In the existing localization research area, most localization methods suffer from propagation loss because of multi-path effects and the sensitivity of wireless technology components, e.g., the Received Signal Strength Indicator (RSSI). This leads to misestimation by localization methods and degraded estimation accuracy. In this paper, a new advanced localization method is proposed from a different point of view. The K-nearest neighbor (KNN) algorithm is adopted to obtain the best estimation accuracy, and the RSSI is used as the main feature for estimating the unknown position of a mobile user. As a new idea, we extend the method to estimate a circular region instead of a single coordinate position, with the aid of circle theory. The objective of this research is to improve the accuracy of localization methods and to mitigate the sensitivity of existing methods. In this study, an advanced localization method is proposed by reconsidering the existing fingerprinting method from a different point of view. In addition, the radius value optimization problem is investigated to analyze the effects of different radius values on localization accuracy. Furthermore, the sensitivity of the estimation accuracy of the proposed method is analyzed in terms of different norm functions.
Authors: San Hlaing Myint, Takuro Sato
Keywords: Fingerprint localization; machine learning; wireless communication
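A minimal sketch of the KNN fingerprinting idea (not the paper's exact pipeline): RSSI vectors observed from several access points act as features, and surveyed (x, y) positions as targets. The synthetic path-loss data, the four-anchor layout, and the fixed radius echoing the paper's circular-region idea are all illustrative assumptions.

```python
# KNN fingerprint localization on synthetic RSSI data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(200, 2))        # surveyed (x, y) points
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
dists = np.linalg.norm(positions[:, None, :] - anchors[None, :, :], axis=2)
rssi = -40 - 10 * np.log10(dists + 0.1)              # toy path-loss model

knn = KNeighborsRegressor(n_neighbors=4).fit(rssi, positions)
est = knn.predict(rssi[:1])[0]                       # estimated (x, y)
# The paper's circular-region idea: report a circle around the estimate.
print("estimated position:", est, "as a circle of assumed radius 1.5 m")
```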
Abstract: In Cognitive Radio (CR) networks, unlicensed Secondary Users (SUs) can occupy the white spaces of spectrum channels when licensed Primary Users (PUs) do not use them. Hence, for the operation of CR networks, it is critical to coordinate spectrum access by SUs and protect the ongoing communication of PUs. In this paper, we propose a new medium access control protocol for SUs, called Mutually Exclusive Guaranteed Access for Cognitive Radio networks (CR-MEGA). CR-MEGA adopts a dual sensing approach (i.e., carrier sensing and spectrum sensing) to avoid packet collisions with faraway PUs as well as nearby SUs. Our scheme performs well even in harsh conditions with highly active PUs, but the advantage comes with increased sensing delay. We analyze the throughput and delay of CR-MEGA using a Markov chain model and investigate the impacts of various parameters with numerical results.
Authors: Muhammad Shafiq, Seonghun Son, Jin-Ghoo Choi, Heejung Yu
Keywords: Cognitive radio networks; dynamic spectrum access; spectrum sensing; carrier sensing; CSMA/CA
Abstract: In order to model, design, execute, control and analyze co-simulations for, e.g., Smart Grids, a new scalable and generic system architecture of an agent-based co-simulation platform framework is presented in this article. Not only different kinds of simulators, e.g., for power grids and technical plants, but also various types of data sources and real hardware nodes, such as wind turbines, photovoltaic cells and electrical power grid equipment, which are instrumented by measurement devices, can be seamlessly integrated into the co-simulation platform in order to model large transdisciplinary, multi-domain energy systems. By integrating Apache Kafka as the message exchange infrastructure into this configurable co-simulation platform, realistic simulation of SCADA communication via standard communication network protocols and services for the Smart Grid, including big data scenarios, can be realized. Container virtualization and microservices, namely Docker containers and a Representational State Transfer (REST) application programming interface (API), are used as basic approaches for an automated runtime environment to control and manage different simulation nodes on a (larger) computing cluster. The co-simulation platform also provides an easy-to-use web browser-based user interface, implemented using Angular2, to allow users to model, implement, perform and operate co-simulations for future energy system solutions without any setup or configuration on their local PC and without extensive IT knowledge. Furthermore, after the individual nodes complete their simulations, the results of a co-simulation run can be automatically stored in databases and then analyzed and visualized via a web user interface.
Authors: Jianlei Liu, Clemens Duepmeier, Veit Hagenmeyer
Keywords: Smart grid; agent-based co-simulation platform; microservice; multi-domain energy system; communication; Representational State Transfer (REST); big data
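The Kafka-based message exchange lends itself to a short illustration. Below is a minimal sketch, assuming the kafka-python client and a broker reachable at localhost:9092 (both assumptions, not details from the paper), of how a simulator node might publish one measurement; the topic name and payload fields are invented for illustration.

```python
# Publishing one simulated measurement to Apache Kafka (kafka-python).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",            # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# e.g. a wind-turbine node reporting one simulation step
producer.send("cosim.measurements",
              {"node": "wind-turbine-1", "t": 42.0, "power_kw": 812.5})
producer.flush()                                   # block until delivered
```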
Abstract: In the Internet-of-Things, the number of connected devices is expected to be extremely large, i.e., more than tens of billions. It is, however, well known that security for the Internet-of-Things is still an open problem. In particular, it is difficult to certify the identification of connected devices and to prevent illegal spoofing. This is because conventional security technologies have advanced mainly to protect logical networks and not physical networks like the Internet-of-Things. In order to protect the Internet-of-Things with advanced security technologies, we propose a new concept (datachain layer) which is a well-designed combination of physical chip identification and blockchain. With the proposed physical chip identification solution, the physical addresses of connected devices are uniquely connected to logical addresses protected by the blockchain.
Authors: Hiroshi Watanabe
Keywords: Internet-of-Things (IoT); blockchain; security; physical chip identification (Physical Chip-ID); datachain; connected devices; logical address; physical address
Abstract: The concept of the microgrid has become more relevant due to the increased use of distributed energy resources. However, important factors to be considered are the ability to control the power flow and the stability of the system. The droop control strategy requires no communication system and realizes the "plug and play" function between source and load. The virtual-impedance-type droop control method is preferred to improve the transient response and power decoupling of the conventional droop method. In this paper, a state-space model of such a system is developed based on small-signal disturbances. Eigenvalue analysis of the system is also performed, and the parameters that determine the stability of the system are identified. The optimum values of these parameters are found by increasing the stability of the system using the particle swarm optimization (PSO) technique. The simulation is done in MATLAB, and the results show that the optimized parameter values improve the stability of the system by shifting the eigenvalues away from the imaginary axis, as far as possible into the left half of the s-plane.
Authors: Binu Krishnan U, Mija S J, Elizabeth P Cheriyan
Keywords: Droop controller; microgrid; eigenvalue analysis; particle swarm optimization
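The parameter search described above can be illustrated with a toy example. The sketch below (not the paper's MATLAB code) uses a bare-bones PSO to choose a single damping parameter p of an assumed 2x2 state matrix so that the rightmost eigenvalue moves as far left in the s-plane as possible; all constants are illustrative.

```python
# Toy PSO pushing eigenvalues of A(p) left in the s-plane.
import numpy as np

def spectral_abscissa(p):
    A = np.array([[0.0, 1.0], [-2.0, -p]])      # assumed toy state matrix
    return np.max(np.linalg.eigvals(A).real)    # rightmost eigenvalue

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 5.0, 20)                   # particle positions
v = np.zeros(20)                                # particle velocities
pbest = x.copy()
gbest = x[np.argmin([spectral_abscissa(q) for q in x])]
for _ in range(50):
    r1, r2 = rng.random(20), rng.random(20)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.1, 5.0)
    better = [spectral_abscissa(a) < spectral_abscissa(b) for a, b in zip(x, pbest)]
    pbest = np.where(better, x, pbest)
    gbest = pbest[np.argmin([spectral_abscissa(q) for q in pbest])]
print("optimal damping parameter:", gbest)      # ~2.83 for this toy system
```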
Abstract: Studies demonstrate that monitoring and recording the movements of rehabilitation exercises can improve the degree of recovery of the patient. Technologies exist to track user movements, but they are often large, expensive, or require multiple units to be mounted on the user in different locations. Each of these can be a barrier to patient adoption of rehabilitation technologies inside the home. We propose a single unit which incorporates an inertial measurement unit (IMU) and infrared sensors to determine the orientation of the arm for various movements. The infrared sensors compensate for IMU drift errors, providing a sensor fusion solution. A novel optical wearable was created for detection of arm movement exercises in three-dimensional space that are consistent with stroke survivor exercises for spasticity rehabilitation. A study of five participants yielded a high average accuracy of 98% across participants, without requiring any normalization of results to participants' varying body sizes. These findings indicate a strong inter-patient similarity in arm movement patterns. This inter-patient similarity implies the possibility of a transfer learning application, where data from various patients can be used to collectively improve the accuracy of the predictive machine learning model. This could allow development of a medical device that is easily donned by the user for rehabilitation in the comfort of their own home, allowing more effective telerehabilitation.
Authors: Jordan Lui, Kevin Andrews, Andrea Ferrone, Lorenzo Colace, Carlo Menon
Keywords: Rehabilitation; stroke; spasticity; telerehabilitation; telerehab; rehabilitation devices; medical devices; wearable technology; biomedical engineering; machine learning; physical rehabilitation; wireless technology; optical sensing; sensor fusion; transfer learning
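The drift-correction idea lends itself to a compact illustration. Below is a minimal complementary-filter sketch, under assumed sensor names, gains and units (none taken from the paper), in which an infrared-derived angle slowly corrects the gyro-integrated arm angle.

```python
# Complementary filter: IR angle corrects slow IMU integration drift.
def fuse(gyro_rate, ir_angle, angle_prev, dt, alpha=0.98):
    """Blend integrated gyro angle (drifts) with IR-derived angle (noisy)."""
    gyro_angle = angle_prev + gyro_rate * dt        # dead-reckoned estimate
    return alpha * gyro_angle + (1 - alpha) * ir_angle

angle = 0.0  # rad; illustrative samples of (gyro rate rad/s, IR angle rad)
for gyro_rate, ir_angle in [(0.5, 0.01), (0.4, 0.02), (0.3, 0.03)]:
    angle = fuse(gyro_rate, ir_angle, angle, dt=0.02)
print("fused arm angle (rad):", angle)
```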
Abstract: Smart Cities, including Smart Buildings, are a fascinating, fast-developing research area. The present work describes an intelligent elevator integrated in the context of a smart building. A Bayesian network approach was designed to drive the decision actions of an elevator, according to information provided by fuzzy rules and by cameras with image recognition software. The aim was to build a decision engine capable of controlling the elevator's actions in a way that improves user satisfaction. Both a sensitivity analysis and an evaluation study of the implemented model, according to several scenarios, are presented. The final algorithm proved to exhibit the desired behavior in 95% of the scenarios tested.
Authors: Vasilios Zarikas, Nurislam Tursynbek
Keywords: Smart buildings; smart houses; elevator; Bayesian networks; intelligent decision making
Abstract: Improvement in resource consumption is among the many important targets that smart heating systems aim to achieve. Such a system automatically manipulates a household's physical artefacts (such as radiators, the heating boiler, etc.) and changes their operational regimes to achieve this goal. This system can formally be represented by a parameter optimization problem. Although substantial research in this area has already been conducted, there is room for improvement on a collective scale. We adapted the context-aware parameter optimization architecture for geographically distributed machines to integrate multiple-peer knowledge into local optimization. This approach is novel because it redefines knowledge mining and interpretation functionality, and it employs clustering and machine learning algorithms. The current paper explores the sensitivity of a heating system's local optimization to the mined knowledge, as this indicates whether the method is applicable at all. A computational experiment confirms such sensitivity and provides a basis for future research.
Authors: Bolatzhan Kumalakov, Lyazzat Ashikbayeva
Keywords: Distributed knowledge management; context recognition; smart heating
Abstract: This paper explores the possibility of implementing a gesture/motion detection and recognition system to recognize the American Sign Language (ASL) protocol for communications and control. Gestures are captured using a Leap Motion sensing device, with recognition based on a Support Vector Regression algorithm. There is a high correlation between the measured and predicted values in the samples; the system is able to recognize the sample data with almost 100% accuracy. Development of network connectivity and establishment of a communication protocol enable "smart" objects to collect and exchange data. With encouraging results given the limitations of the hardware, this work can be used to collect and exchange data with devices, sensors and information nodes, and provides a new solution for the Internet of Things in the future.
Authors: Rita Tse, AoXuan Li, Zachary Chui, Marcus Im
Keywords: Leap motion sensing device; American Sign Language detection and recognition; support vector regression; Internet of Things
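A minimal sketch of the recognition stage (not the authors' code): Support Vector Regression maps Leap Motion feature vectors to numeric gesture labels, with the regression output rounded to the nearest class. The feature dimensionality and the synthetic data are illustrative assumptions.

```python
# SVR-based gesture classification: classes encoded as numbers,
# regression output rounded back to a class id.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))          # e.g. fingertip positions/angles
y = rng.integers(0, 5, size=300)        # 5 gesture classes as numbers
X += y[:, None]                         # make the toy classes separable

model = SVR(kernel="rbf").fit(X, y)
pred = np.clip(np.rint(model.predict(X[:3])), 0, 4).astype(int)
print("predicted gesture ids:", pred)
```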
Abstract: To make the best use of the vast potential opportunities that accompany medical IoT sensor devices, security and privacy measures are expected to be inculcated as a fundamental requirement within these systems. Although cryptography proves to be a promising solution to the data breach problem, there is an inconclusive debate between the use of symmetric encryption schemes, which provide reduced computation overhead, and asymmetric encryption schemes, which deliver better authentication in IoT devices. This paper presents a brief analysis of various existing symmetric and asymmetric lightweight algorithms for IoT. Using simulation tests, we provide important analysis and considerations on the practical feasibility of these cryptographic algorithms in IoT systems built using sensor networks, to help designers predict security performance under a set of constraints. We also propose a new methodology which blends symmetric and asymmetric encryption schemes for enhanced data transfer in healthcare-related IoT devices.
Authors: Revanesh M, V. Sridar
Keywords: Biomedical; Internet of Things; cryptography; symmetric encryption; asymmetric encryption
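The proposed blend of symmetric and asymmetric encryption can be sketched as a standard hybrid scheme. The example below, assuming the pyca/cryptography package (the paper does not name a library), encrypts a sensor payload with a cheap symmetric key and wraps that key with RSA-OAEP for the server.

```python
# Hybrid encryption sketch: symmetric key for the payload,
# RSA-OAEP to wrap the key. Payload and key sizes are illustrative.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

device_key = Fernet.generate_key()                    # symmetric session key
ciphertext = Fernet(device_key).encrypt(b'{"hr": 72, "spo2": 98}')

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
server_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = server_priv.public_key().encrypt(device_key, oaep)

# The device sends (wrapped_key, ciphertext); only the server can unwrap.
recovered = server_priv.decrypt(wrapped_key, oaep)
print(Fernet(recovered).decrypt(ciphertext))
```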
Abstract: Acoustically induced vibration can cause museum art objects to deteriorate at a very rapid pace, thereby endangering the cultural heritage they conserve. Here, wireless triaxial MEMS accelerometer nodes were used to monitor the effect of loud, live music played during social events at the Walters Art Museum on mock art objects. During the social event, continuous wireless vibration monitoring was performed from sensors placed on the respective blocks with the art object under test, to provide real-time vibration activity information. The vibration induced by the music, as well as the vibration propagation through the specific object via its mounting or display mechanism, were evaluated. The loud music generated high acoustic energy that excited the object vibration modes and placed the object at risk of induced fatigue cracking and "walking", all of which can potentially cause catastrophic damage in extreme cases. The acoustic field with an intensity of 90 dB induced the highest level of vibration. The display block orientations were observed to contribute to the vibration activity, wherein the vertical orientation induced higher levels of vibration than the horizontal orientation.
Authors: Khalid Elawad, Federica Roth, Janaya Slaughter, Gymama Slaughter
Keywords: Acoustic; vibration; monitoring; wireless; sensors
Abstract: This study is part of ongoing work regarding possible scenarios during autonomous driving, and it takes into consideration not only academic literature and industry updates, but also the aspects described in the standards already disclosed. As autonomous driving systems become a more tangible reality, the development of an efficient warning strategy within human-machine interaction (HMI) is paramount, for a range of reasons that include trust in these emerging systems. Several researchers have noted that a particular moment of semi-autonomous driving is of special interest: the driver's role shift from passive monitoring of the vehicle to active control of the autonomous driving system. This study presents a cooperative approach to the vehicle-driver communication strategy, accounting for both human factors and the complexity of AD systems. A nexus diagram has been developed that comprehensively provides an alternative to the conventional static warning strategy, one that can be customized in specific traits and later used by programmers to expeditiously implement this much-needed strategy in the real context of semi-autonomous driving.
Authors: Susana Costa, Paulo Simões, Nelson Costa, Pedro Arezes
Keywords: Warning; driver; autonomous; human-machine interaction (HMI); vehicle
Abstract: Smart cities are the current technological solution to handle the challenges and complexity of growing urban density. Traditionally, smart city resource management relies on cloud-based solutions where sensor data are collected to provide a centralized and rich set of open data. The advantages of cloud-based frameworks are their ubiquity and an (almost) unlimited resource capacity. However, accessing data from the cloud implies large network traffic, high latencies usually not appropriate for real-time or critical solutions, and higher security risks. Alternatively, fog computing emerges as a promising technology to absorb these inconveniences. It proposes the use of devices at the edge to provide closer computing facilities, thereby reducing network traffic, reducing latencies drastically, and improving security. We have defined a new framework for data management in the context of a smart city through a global fog-to-cloud resources management architecture. This model has the advantages of both fog and cloud technologies, as it allows reduced latencies for critical applications while being able to use the high computing capabilities of cloud technology. In this paper, we present the data acquisition block of our framework and discuss its advantages. As a first experiment, we estimate the network traffic in this model during data collection and compare it with a traditional real system.
Authors: Amir Sinaeepourfard, Jordi Garcia, Xavier Masip-Bruin, Eva Marin-Tordera
Keywords: Smart city; fog-to-cloud (F2C) computing; data management; data lifecycle model (DLC); data aggregation
Abstract: This paper presents the implementation of a cost-effective system to classify muscular intent. A neural network is used for this purpose. After skin preparation, feature extraction, network training and real-time testing, an average overall classification accuracy of 93.3% over three possible gestures was obtained. Ultimately, the results obtained speak to the suitability of an Arduino-based system for the acquisition and decoding of muscular intent. This result is indicative of the potential of the Arduino microcontroller in this application, to provide effective performance at a far lower price-point than its competition.
Authors: Anmol Khanna, Senthil Arumugam Muthukumaraswamy
Keywords: Electromyography; orthosis; prosthesis; neural network; muscular intent; Arduino; cost-effective
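As a rough stand-in for the classification stage described above (the abstract does not specify the network layout), the sketch below trains a small scikit-learn neural network on synthetic EMG window features; the feature names and sizes are assumptions.

```python
# Small neural network classifying muscular intent from EMG features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))       # e.g. MAV, RMS, ZC, WL per window
y = rng.integers(0, 3, size=300)    # 3 gestures, as in the abstract
X += y[:, None]                     # make the toy classes separable

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```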
Abstract: We present technology that uses a computer-generated 3-D image inserted into a real-time endoscopic view. The generated image represents the safe zone, the area in which the tip of the surgical tool should stay during the operation. Movements of tools can be tracked using existing techniques of surgical tool navigation based on tracking fiduciary markers. We describe a spatial measurement system and explain the process of virtual zone creation, the challenges related to the accuracy of the augmented reality system, and a calibration process that can be improved by machine learning techniques. We discuss possible future usage of this system in telesurgery, both on the ground and in space.
Authors: Bojan Nokovic, Tian Zhang
Keywords: Augmented reality; image processing; endosurgery
Abstract: The monitoring and early detection of abnormalities in cardiac cycle morphology have a significant impact on the prevention of heart diseases and their associated complications. The electrocardiogram (ECG) is very effective in detecting irregularities of heart muscle functionality. In this work, we investigate the detection of possible abnormalities in the ECG signal and the identification of the corresponding heart disease in real time using an efficient algorithm. The algorithm relies on cross-correlation theory to detect abnormalities in the ECG signal and incorporates two cross-correlation steps. The first step detects an abnormality in a real-time ECG signal trace, while the second step identifies the corresponding disease. The optimization of search time is the main advantage of this algorithm.
Authors: Soha Ahmed, Ali Hilal-Alnaqbi, Mohamed Al Hemairy, Mahmoud Al Ahmad
Keywords: Cross-correlation; abnormalities detection; ECG; cardiac cycle; eHealth; remote monitoring; algorithm
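The cross-correlation step can be illustrated compactly. Below is a minimal sketch (not the authors' two-step algorithm) in which the peak of a normalized cross-correlation between an incoming ECG trace and a healthy template is thresholded to flag a possible abnormality; the signals and the 0.8 threshold are synthetic.

```python
# Normalized cross-correlation of an ECG trace against a template.
import numpy as np

def ncc_peak(signal, template):
    """Peak of the normalized cross-correlation between trace and template."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return np.max(np.correlate(s, t, mode="valid")) / len(t)

t = np.linspace(0, 1, 250)
template = np.sin(2 * np.pi * 1.2 * t)          # stand-in for a normal beat
trace = template + 0.05 * np.random.randn(250)  # incoming trace with noise
print("abnormal" if ncc_peak(trace, template) < 0.8 else "normal")
```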
Abstract: Activity tracking has gained prominence with the phenomenal growth of fitness devices and ever-increasing fitness awareness, and step count has emerged as the primary fitness parameter. Step count is typically derived from continuous motion signal analysis. Either a dedicated hardware processing unit or a software signal processing unit that continuously counts repetitions in the motion signal is the common implementation choice. While dedicated hardware increases the BOM (Bill of Material), a software pedometer incurs high power consumption. In this paper, we propose a power-efficient step estimation algorithm that avoids a continuous high-power requirement even during motion, at an acceptable accuracy, using a smartphone's tri-axial accelerometer data.
Authors: Dipankar Das, Vishal Bharti, Prakhyath Kumar Hegde, MoonBae Song
Keywords: Software pedometer; step count; step estimation; accelerometer; fitness device; smartphone
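A minimal sketch of step estimation from tri-axial accelerometer data: steps appear as peaks in the acceleration magnitude, so a peak detector with a refractory distance approximates a pedometer. The sampling rate, thresholds and synthetic signal are illustrative assumptions, not the paper's values.

```python
# Counting steps as peaks in the acceleration magnitude signal.
import numpy as np
from scipy.signal import find_peaks

fs = 50.0                                 # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# synthetic magnitude: gravity plus ~3 step impacts per second
mag = 9.8 + 1.5 * np.abs(np.sin(2 * np.pi * 1.5 * t))

# height filters weak fluctuations; distance enforces a refractory period
peaks, _ = find_peaks(mag, height=10.3, distance=int(0.3 * fs))
print("estimated steps:", len(peaks))
```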
Abstract: Music has become an essential component of indoor and outdoor workouts. Studies show that a budding runner can find a significant increase in motivation simply by listening to music. In this paper, we propose a system to further enhance the ergogenic benefits of music by providing real-time musical guidance derived from variations in the user's physiological factors. This system helps the user increase the time spent in the desired heart rate zone, thus helping the user reach their goal faster.
Authors: Prakhyath Kumar Hegde, Shoaib Sheriff, Shiva Murthy Busetty, Jinmook Lim
Keywords: Music; heart rate; workout zones; music guidance; ergogenic benefits
Abstract: Health inequality is a widely reported problem. There is an existing body of work that links health inequality and geographical location. This means that one might be more disadvantaged health-wise if one was born in one region rather than another. Existing health inequality work in various developed and developing countries relies on population census or survey data. Drawing effective conclusions requires large-scale data with multiple parameters. There is a new phenomenon in countries such as the UK, where governments are opening up citizen-centric data for transparency purposes and to facilitate data-informed policy making. Many health organisations, including the NHS and sister organisations (e.g., HSCIC), participate in this drive to open up data. These health-related datasets can be exploited for health inequality analytics. This work presents a novel approach to analysing health inequality in English regions based solely on open data. A methodological and systematic approach grounded in the CRISP-DM methodology is adhered to for the analyses of the datasets. The analysis utilises a well-cited work on health inequality in children and the corresponding parameters: preterm birth, low birth weight, infant mortality, excessive weight in children, breastfeeding prevalence and children in poverty. An authority in health datasets, the Public Health Outcomes (PHO) Framework, is chosen as a data source that contains data with these parameters. The analysis is carried out using various SAS data mining techniques, such as clustering and time series analysis. The results show the presence of health inequality in English regions. The work clearly identifies the English regions on the right and wrong sides of the divide. Policy and future work recommendations based on these findings are articulated in this research. The work presented in this paper is novel, as it applies SAS-based BI techniques to analyse health inequality for children in the UK based solely on open data.
Authors: Neha Thakkar, Ah-Lian Kor, Sanela Lazarevski
Keywords: SAS; BI techniques; open health data; data mining; health inequality
Abstract: This paper addresses the realization of a Human/Machine (H/M) interface, including a system for Automatic Recognition of Continuous Pathological Speech (ARSCPS) and several communication tools, in order to help frail people with speech problems (dysarthric speech) access services provided by the new Technologies of Information and Communication (TIC), while making it easier for doctors to reach a first diagnosis of the patient's disease. An ARSCPS has been improved and developed for normal and pathological voices while establishing a link with our graphical interface, which is based on the Hidden Markov Model Toolkit (HTK) and hidden Markov models (HMMs). In our work, we used different feature extraction techniques for the speech recognition system in order to improve dysarthric speech intelligibility while developing an ARSCPS that can perform well for pathological and normal speakers. These techniques are based on the coefficients of the ETSI standard Mel Frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Perceptual Linear Prediction coefficients (PLP), Mel Frequency Cepstral Coefficients (MFCC) and the recently proposed Power Normalized Cepstral Coefficients (PNCC) have been used as a basis for comparison. In this context, we used the Nemours database, which contains 11 speakers representing dysarthric speech and 11 speakers representing normal speech.
Authors: Brahim-Fares Zaidi, Malika Boudraa, Sid-Ahmed Selouani, Djamel Addou
Keywords: Automatic Recognition System of Continuous Pathological Speech (ARSCPS); ETSI standard Mel Frequency Cepstral Coefficient Front End (ETSI MFCC FE V2.0); Hidden Markov Model Toolkit (HTK); Hidden Markov Models (HMM); Human/Machine (H/M); Technologies of Information and Communication (TIC); Mel Frequency Cepstral Coefficients (MFCC); Perceptual Linear Prediction (PLP); Power Normalized Cepstral Coefficients (PNCC)
Abstract: This article proposes a system which can measure heart rate, blood glucose and oxygen saturation ratio non-invasively with the maximum possible accuracy. The design is easy to use, real-time and pain-free. Preliminary results were acquired on a prototype amplification and filter circuit, a sensor consisting of two LEDs, red (660 nm) and near-infrared (940 nm), as transmission sources, and an Arduino controller board. Fingertip photoplethysmographic (PPG) signal analysis is performed. Furthermore, the results have been compared to a commercially available pulse oximeter, and measurement accuracies of ±3% for pulse rate and ±1% for oxygen saturation were observed. In non-invasive glucose measurement, accuracy plays a vital role. Hence, a 3-day clinical trial was conducted in a hospital on various diabetic and non-diabetic test specimens of different ages. In total, 132 specimens were analyzed during the trial period, and the results were compared with the Beckman Coulter AU-480 chemistry analyzer in the hospital's pathology laboratory. Clarke Error Grid (CEG) analysis shows that 94.70% of the readings (across all 3 days) fall in the clinically accepted zone A. The Absolute Relative Differences (ARDs) yield mean and median error values of 9.51% and 8.05%, respectively.
Authors: Nazo Haroon, Mohsin Islam Tiwana
Keywords: Pulse oximeter; photoplethysmography; absolute relative difference; non-invasive; chemistry analyzer
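The red/infrared LED pair above is the basis of the textbook "ratio of ratios" SpO2 estimate, sketched below; the linear calibration constants are common textbook approximations, not the calibrated values of the proposed device.

```python
# Ratio-of-ratios SpO2 estimate from red/IR PPG components.
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """SpO2 (%) from AC/DC components of the red and IR PPG signals."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)   # ratio of ratios
    return 110.0 - 25.0 * r                   # common empirical calibration

print(spo2_estimate(red_ac=0.02, red_dc=1.0, ir_ac=0.04, ir_dc=1.5))
```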
Abstract: To improve public health care outcomes at reduced cost, this research proposes a framework which focuses on the positive and negative symptoms of illnesses and the side effects of treatments. Previous studies have been limited, as they neither identified influential users nor discussed how to model the forms of relationships that affect network dynamics and determine the accurate ranking of certain end users' feedback. In this research, a two-step analysis framework is proposed. At the first level, the system performs exploratory analysis and clusters users and their useful feedback through self-organizing maps (SOM). At the second level, the system develops three lists of negative and positive feedback and treatment symptoms by applying the SOM, with accurate ranking obtained by calculating the frequency of each term of interest. The feasibility of the proposed solution is confirmed by performance evaluations of the system in terms of computational costs. The results show that these solutions have reasonable computational costs with respect to memory and processor usage.
Authors: Mohammed Saeed Jawad, Wael Adi, Afaf Salem, Mohamed Doiher
Keywords: Data mining; social media; medical data; end user feedbacks; positive terms; negative terms; symptoms
Abstract: Music has various effects on the brain and body. Multiple studies have suggested a marked difference in information processing between musicians and non-musicians when listening to music. However, where these changes occur within the brain is yet to be established. The amount of data obtained to study information processing in the brain is huge, and surplus features may be redundant; there is thus an increased need to reduce the amount of data. In this study, information processing in the brain is determined by obtaining optimum features using data reduction processes. The features obtained are compared with brain activation in an attempt to predict behavioral data. Twenty healthy subjects were considered, of which 10 were musicians and 10 were non-musicians. All subjects listened to a music stimulus that was played. Electroencephalography was used to record responses, and behavioral data were also obtained. The major predictors for musicians were frontal and temporal lobe electrodes, and this was absent in non-musicians. Although some electrodes have high node strengths, they were not indicated as predictors. Enhanced inter- and intra-hemispheric functional connectivity was seen for the musicians, which was due to familiarity and music learning.
Authors: Lavanya Krishna, B. Geethanjali, Chandramouli Ramesh, Mahesh Veezhinathan
Keywords: Information processing; data reduction; prediction; brain activation; familiarity
Abstract: New trends in software engineering are reshaping the computing landscape: computation is increasingly portable, storage is increasingly elastic, and data accessibility is increasingly "always on" and "always available" to an exponentially increasing variety of applications and devices. While the effects of these trends in the larger "compute-verse" are profound, this paper discusses and considers how these trends are specifically affecting healthcare informatics. Indeed, end users will experience this trend in applications that are web-centric and mobile-friendly. Such apps will be increasingly used as gateways to powerful backend services (such as analytics and deep learning), while offering local client-side specialization (rich, immersive visualizations and collaborations). The paper offers some perspectives, presents some unmet needs in medical informatics, and seeks to provide a viewpoint into how the "next wave" of computing might present itself. In particular, the paper presents a web-based medical image data and information management software platform called CHIPS (Cloud Healthcare Image Processing Service). This cloud-based service uniquely provides an end-to-end service that can securely connect data from deep within a hospital to the cloud, allow for powerful collaboration on both medical image data and image processing pipelines, enable complex processing and computational research, and provide a vision of decentralized, large-scale data analysis that can fuel Big Data in medical bioinformatics.
Authors: Rudolph Pienaar, Jorge Bernal, Nicolas Rannou, P. Ellen Grant, Daniel Hahn, Ata Turk, Orran Krieger
Keywords: Web based neuroimaging; big data; applied containerization; telemedicine; cloud-storage
Abstract: There are different three-dimensional (3D) imaging systems which allow measurements to be obtained from surface geometries. However, for medical applications, such methods need to be safe, causing no harm and using no radiation. This paper therefore proposes a methodology for obtaining facial surfaces to be compared over time for orthodontic applications. Specifically, a 3D laser scanner was employed for facial geometry acquisition, and dedicated software was used to perform rigid surface registration among three different times: (T1) before the treatment, (T2) 15 days after wearing the orthodontic appliance, and (T3) three months after ending the treatment. A change in the face was observed, especially in the region of the nostrils (1.47 mm) and maxilla (1.78 mm). The use of a 3D scanner for facial scanning is able to measure the small volumetric changes caused by wearing a palatal expander over time, which produces significant facial changes and will provide social and health impacts.
Authors: Mauren Abreu de Souza, Cristiane Schmitz, Melissa Galarza Rodrigues, Giovanna Simião Ferreira, Elisa Souza Camargo, Percy Nohama
Keywords: Three-dimensional measurements; laser scanning system; facial geometry; orthodontic applications
Abstract: Alzheimer's disease (AD) is one of the most common forms of dementia. Accurate detection of AD and its initial stage, i.e., mild cognitive impairment (MCI), is a challenging task. In this study, a computer-aided diagnosis (CAD) system is implemented on clinical and diagnostic imaging data from the OASIS database. The amygdala and hippocampus are the regions most affected by Alzheimer's and are located inside the grey matter region of the brain. Features used for classification, such as entropy, energy, homogeneity, and correlation, are calculated using the grey level co-occurrence matrix (GLCM). The ratios of the grey matter and white matter volumes to the cerebrospinal fluid volume are also used. Clinical features are used as well, improving the classification accuracy to 94.6% for binary classification. The proposed algorithm is also used for multi-class classification, where three classes, namely normal (N), Alzheimer's disease (AD), and mild cognitive impairment (MCI), are considered. An accuracy of 79.8% is achieved on these classes, which is significant since the classes considered are highly similar. We have achieved improved results in comparison to state-of-the-art techniques for binary classification and have also performed multi-class classification.
Authors: Tooba Altaf, Syed Muhammad Anwar, Nadia Gul, Muhammad Majid, Muhammad Nadeem Majeed
Keywords: Alzheimer; hybrid features; classification; multi-class
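A minimal sketch, assuming a recent scikit-image, of extracting the GLCM features named in the abstract from a grey-level slice; the random image stands in for real MR data, and entropy (absent from graycoprops) is computed directly from the normalized matrix.

```python
# GLCM texture features: energy, homogeneity, correlation, entropy.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 64, size=(128, 128), dtype=np.uint8)  # toy slice
glcm = graycomatrix(img, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("energy", "homogeneity", "correlation")}
features["entropy"] = -np.sum(glcm * np.log2(glcm + 1e-12))
print(features)
```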
Abstract: Slicer has been used in the medical community for several years now. This paper describes a Python extension imported into the Slicer application. The main contribution of the paper is to outline and explain how to import and test a Python extension which we created to isolate grey matter and bone in MR brain volume images. Our future plans include qualitative and quantitative analysis, validation and comparison with other similar techniques, and extensions to 3D surface extraction and interpretation using Slicer.
Authors: Ashley Whiteside, Sudhanshu Kumar Semwal
Keywords: 3D slicer; medical visualization; thresholding
Abstract: The detection of anomalies in large crowds is a cognitive task. A proactive approach is required to effectively manage crowd flow and to accurately detect erratic crowd behavior. In this paper, we present an algorithm which observes crowd optical flow in real time and detects abnormal events in crowds automatically. The system takes frames at regular intervals through a video camera and processes these frames using image processing techniques. The proposed system further uses certain rules to classify normal or abnormal crowd activities. We propose a novel motion vector based technique to detect the behavior of the cluster of interest. The features of the motion vectors are analyzed to characterize the crowd behavior. The evaluation of the system is performed using different videos exhibiting different crowd behaviors, and the results on simulated crowds demonstrate the effectiveness of the proposed system.
Authors: Amna Sajid, Sajid Gul Khawaja, Mehak Tofiq
Keywords: Motion vector; crowd behavior; real time processing
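As an illustration of motion-vector extraction (the abstract does not name a specific flow method), the sketch below computes dense optical flow with OpenCV's Farneback algorithm on a synthetically shifted frame pair and summarizes the vectors into magnitude and direction features.

```python
# Dense optical flow as a source of crowd motion vectors.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (240, 320), dtype=np.uint8)  # frame t-1
curr = np.roll(prev, 3, axis=1)                               # frame t, shifted

# args: prev, next, flow, pyr_scale, levels, winsize, iterations,
#       poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean speed:", mag.mean(), "dominant direction (rad):", ang.mean())
```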
Abstract: Domotics is a field whose name fuses the words "domus" (which means home in Latin) and "robotics", and it is linked directly to the act of automating something. Using facial recognition, by detecting eye blinks and eye movements, it is possible to perform various domestic tasks, such as turning a lamp on and off, opening windows, and other activities. In this scenario, an application was developed for the Android platform that connects to the Particle Photon microcontroller, which is focused on the Internet of Things and is also responsible for providing Wi-Fi as the embedded communication channel, making it possible to control the functions that trigger the routine tasks of the residence. For image processing, we adopted an open-source library called OpenCV. The application contains an Accessibility Mode, selected by default, that enables the blink recognition functions usable by all people, in addition to the traditional touch-screen interaction.
Authors: Adriano P. Nanes, Clauber C. Souza, Diego Y. Miamoto, Murilo O. Lima, Paulo H. C. Silva, Adriane P. Colossetti
Keywords: Domotics; image processing; computer vision; artificial intelligence
Abstract: The artistic style of a painting can be sensed by the average observer, but algorithmically classifying the artistic style of an artwork is a difficult problem. The recently introduced neural-style algorithm uses features constructed from the low-level activations of a pretrained convolutional neural network to merge the artistic style of one image or set of images with the content of another. This paper investigates the effectiveness of various representations based on the neural-style algorithm for use in algorithmically classifying the artistic style of paintings. This approach is compared with other neural network based approaches to artistic style classification. Results that are competitive with other recent work on this challenging problem are obtained.
Authors: Jeremiah W. Johnson
Keywords: Artificial intelligence; neural network; style transfer; deep learning; computer vision; machine learning
Abstract: It is useful to simulate disaster situations by reconstructing actual buildings in a virtual space, enabling the people who use those buildings to learn how to act in a disaster before it occurs. We are therefore developing a disaster-simulation system that simulates various disaster situations by virtually reproducing the situation inside buildings, allowing individuals to experience disaster situations using the latest virtual reality (VR) technology. We use a mobile robot equipped with multiple laser-range sensors that measure the distance to objects in a building and an RGB-depth camera to collect distance and image data while the robot automatically travels along a route suitable for 3D measurement. We also manually scan physical objects individually by using a handheld 3D sensor. We then arrange the objects in a 3D map and manipulate them. We have also developed a VR system called "Building-Scale VR" that consists of indoor 3D maps filled with manipulable virtual objects that we call "operation targets" and a VR headset capable of position tracking within the building. In this paper, we explain how to implement Building-Scale VR and its applications to disaster simulations.
Authors: Katashi Nagao, Yusuke Miyakawa
Keywords: Virtual reality; 3D map; autonomous mobile robot; disaster simulation
Abstract: In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user's perception of the three-dimensional space. This paper seeks to put forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among the possible perceptual artefacts, the most significant one relates to diplopia arising from reduced stereo overlaps and excessively large screen disparities. Two state-of-the-art solutions are reviewed. The first suggests a dynamic change, via software, of the virtual camera convergence, whereas the second suggests a matched hardware/software solution based on a series of predefined focus/vergence configurations. The potential and limits of both solutions are outlined so as to provide the AR community with a yardstick for developing new stereoscopic video see-through systems suitable for different working distances.
Authors: Fabrizio Cutolo, Vincenzo Ferrari
Keywords: Augmented reality and visualization; stereoscopic display; stereo overlap; video see-through
Abstract: Random fields have remained a topic of great interest over past decades for the purpose of structured inference, especially for problems such as image segmentation. The local nodal interactions commonly used in such models often suffer from the short-boundary bias problem, which is tackled primarily through the incorporation of long-range nodal interactions. However, computational tractability becomes a significant issue when incorporating such long-range nodal interactions, particularly when a large number of them (e.g., fully-connected random fields) are modeled. In this work, we introduce a generalized random field framework based around the concept of stochastic cliques, which addresses the issue of computational tractability when using fully-connected random fields by stochastically forming a sparse representation of the random field. The proposed framework allows for efficient structured inference using fully-connected random fields without any restrictions on the potential functions that can be utilized. Several realizations of the proposed framework using graph cuts are presented and evaluated, and experimental results demonstrate that the proposed framework can provide competitive performance for the purpose of image segmentation when compared to existing fully-connected and principled deep random field frameworks.
Authors: Mohammad Javad Shafiee, Alexander Wong, Paul Fieguth
Keywords: Fully connected random field; random graph; stochastic cliques; graph cuts; Markov random fields
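A toy sketch of the stochastic-clique idea as the abstract presents it: rather than keeping every pairwise interaction of a fully-connected field, each pair is retained with a probability that decays with spatial distance, yielding a sparse representation. The decay profile and node layout are illustrative assumptions.

```python
# Stochastic sparsification of a fully-connected field over 100 nodes.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(100, 2))                 # node positions
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
keep_prob = np.exp(-d / 0.2)                              # assumed decay profile
upper = np.triu(rng.random(d.shape) < keep_prob, k=1)     # sample each pair once
adj = upper | upper.T                                     # symmetric sparse field
print("kept edges:", upper.sum(), "of", 100 * 99 // 2)
```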
Abstract: A vehicle detection algorithm developed by Surendra (called the Surendra algorithm) is composed of three parts: segmentation, adaptive background updating and background extraction. The Surendra algorithm is sensitive to dynamic environments and is easily influenced by noise and illumination when detecting moving objects. For this reason, we present an improved Surendra algorithm (called the Surendra_αInst algorithm), in which the frame-difference method is applied to calculate the motion mask, replacing the original approach of applying a Boolean AND between two binary images, each derived by subtracting the background from one of two adjacent frames and thresholding the result. The motion mask is then employed to calculate the instantaneous background. At the same time, according to the change rate of the background pixels, the background update coefficient is calculated to obtain a stable background image. Experimental results on five different types of image sequences show that our Surendra_αInst algorithm, compared with the Surendra, Surendra_AvgInit and Surendra_α algorithms, has a higher detection rate (DR) and a lower false alarm rate (FAR), and the detected moving objects are more complete.
Authors: Fang Dai, Dan Yang, Tong Dang
Keywords: Moving object detection; Surendra algorithm; background update coefficient; motion mask
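A minimal sketch of the gated background update described above; the adaptive coefficient of Surendra_αInst is replaced here by a fixed illustrative alpha. A frame-difference motion mask freezes the background under moving pixels and updates it elsewhere.

```python
# Frame-difference motion mask gating a running background update.
import numpy as np

def update_background(bg, frame, prev_frame, diff_thresh=15, alpha=0.05):
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    bg = bg.astype(float)
    # update only where no motion was detected; freeze moving regions
    bg[~motion] = (1 - alpha) * bg[~motion] + alpha * frame[~motion]
    return bg.astype(np.uint8)

prev = np.full((240, 320), 100, dtype=np.uint8)
curr = prev.copy()
curr[100:140, 150:200] = 200                   # a moving object appears
bg = update_background(prev.copy(), curr, prev)
print("background unchanged under the object:", bg[120, 170] == 100)
```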
Abstract: In the common multiclass classification problem, the main difficulties occur when classes are not mutually exclusive. In order to solve problems such as document classification, medical diagnosis or scene classification, we need robust and reliable tools. In this paper, we consider the problem of scene classification treated by hidden Markov models (HMMs) using a novel and intuitive classification process. We introduce a modeling system that maps the parameters of the HMM (observations and hidden states) onto the variables of the scene classification problem (scene categories and objects belonging to the scene). The HMM is constructed with the support of object weight ranking functions. Inference algorithms are developed to extract the most suitable scene category from the generated discrete Markov chain. In order to validate the efficiency of the proposed method, we used the MIT Indoor dataset (2700 scenes distributed into 67 scene categories) to evaluate the classification accuracy. We also compared the obtained results with current state-of-the-art methods. Our approach distinguishes itself by obtaining results of up to 76% correctly classified scenes.
Authors: Benrais Lamine, Baha Nadia
Keywords: Scene classification; object’s weight; hidden Markov models
Abstract: In this paper, the Canny and Sobel operators are combined to scale an image and to produce an image with high resolution and clear edges. The Canny operator is used first to detect the edges of the objects inside the original image. After that, four Sobel operators in different directions are applied to detect the edge directions. The direction of the edge at an edge point is determined by comparing the first derivatives of the intensity at edge pixels. The image interpolation is then carried out adaptively according to the determined edge direction. For homogeneous areas, bilinear interpolation is applied. After image scaling, a method to suppress zig-zag noise is applied to enhance the output image. The first and second steps of this zig-zag suppression are the same as in the proposed image scaling algorithm; the pixels around the edge pixels are then specially modified. The experimental results show that our proposed algorithm can produce scaled images with high resolution and well-preserved edges.
Authors: Wanli Chen, Hongjian Shi
Keywords: Image scaling; edge detection; Sobel operator; Canny operator; bilinear interpolation; bicubic interpolation
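The first two steps of the proposed scheme can be sketched compactly: Canny locates edge pixels, then four directional Sobel-style kernels vote on each edge pixel's orientation. The kernels and the synthetic test image are illustrative, and the adaptive interpolation itself is omitted.

```python
# Step 1: Canny edge map. Step 2: directional Sobel responses vote.
import cv2
import numpy as np

img = np.zeros((120, 160), np.uint8)
img[:, 80:] = 200                                    # synthetic vertical edge
edges = cv2.Canny(img, 100, 200) > 0                 # step 1: edge pixels

kernels = {                                          # step 2: 4 directions
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], np.float32),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32),
    "diag45":     np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], np.float32),
    "diag135":    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], np.float32),
}
responses = np.stack([np.abs(cv2.filter2D(img.astype(np.float32), -1, k))
                      for k in kernels.values()])
direction = np.argmax(responses, axis=0)             # strongest response wins
# direction[edges] gives the interpolation direction per edge pixel
print("edge pixels:", edges.sum(),
      "orientation votes:", np.bincount(direction[edges], minlength=4))
```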
Abstract: In modern society, autonomous quadrotors can be used to perform tasks and collect data in dangerous and inaccessible environments where human involvement would traditionally be necessary. Unmanned Aerial Vehicles (UAVs), and especially the quadrotor, still face obstacles in following a trajectory and flying autonomously in enclosed, complex or GPS-denied areas. This paper focuses on presenting the literature on the quadrotor's ability to follow terrain. It starts with the current research framework and its advantages and disadvantages. Next, a new research framework is proposed. The new method develops a novel navigation framework which would allow the UAV to autonomously follow unknown terrain while maintaining a certain distance from it, within environmental and energy consumption restraints. The proposed method involves connecting a single-beam LiDAR sensor to the base of the quadrotor in order to retrieve reliable and detailed information about the undulations of the terrain ahead. The sensor then feeds this information back to the quadrotor so that its controller can create a suitable trajectory and ensure a smooth flight path.
Authors: Nasser Ayidh AlQahtani, Homayoun Najjaran
Keywords: Quadrotor; Terrain Following; GPS Denied Environment
Abstract: We describe here the prototyping of an intelligent personal robot named BoBi Secretary. When closed, BoBi is a rectangular box the size of a smartphone. The owner can call BoBi to open, transforming from a box into a movable robot, which then performs many human-like functions such as moving, talking, emoting, singing, dancing, and conversing with people, in order to make people happy, enhance people's lives, facilitate relationships, have fun with people, connect people with the outside world, and assist and support people as an intelligent personal assistant. We consider BoBi a treasure and so call the box a moonlight box, "月光宝盒" in Chinese. BoBi speaks with people, tells jokes, sings and dances for people, understands the owner and recognizes people's voices. It can do all the work a secretary does, including scheduling, schedule reminders, sending emails, making phone calls, booking, making reservations, searching for information, etc. BoBi has three main functions: intelligent meeting recording, multilingual interpretation and reading papers. BoBi is a portable, transformable, movable and intelligent robot.
Authors: Jiansheng Liu, Bilan Zhu Full Text
Keywords: Intelligent robot system; personal assistant robot; portable robot; transformable robot; movable robot
Abstract: Moving heavy and over-sized loads poses significant control challenges. A single crawler crane may be insufficient for such lifting tasks if the payload exceeds the capacity, or if the payload’s size and shape make it difficult to secure it to a single crane hook. To solve these problems, it may be necessary to manipulate such items by tandem lifting with two cranes. These cranes are usually driven by two operators whose actions are coordinated by a lift director. In this paper, a pseudo-dynamic model describing the behavior of such a system, when the bases of the cranes are moving in a straight line, is derived. The paper also sets basic guidelines that prevent tip-over accidents. Finally, it presents a control system that eliminates the need for a second crane operator by making one crane mimic the behavior of the other crane, thus reducing the possibility of human errors.
Authors: Sima Rishmawi, William Singhose Full Text
Keywords: Crawler; crane; tandem; robotic; tip-over stability; control; crane safety
Abstract: Human behavior is complex and challenging to learn from daily life activities, and persons who are dependent risk being neglected by society. Besides infants, the elderly are observed to have the highest accident rates when performing daily life activities. Alzheimer’s disease is a common impairment that leads to dementia in elderly people; the resulting forgetfulness prevents them from living an independent life. Continuous care and monitoring are required for Alzheimer’s patients to live a healthy life, as it generally becomes difficult for people who suffer from this progressive disease to live independently. To support elderly people who wish to live independently, perform their daily activities smoothly, and remain safe at home, their daily life activities must be recognized so that appropriate aid can be provided. A heuristic approach has been developed to recognize human behavior and intentions from sensor events, and a smart environment was created for monitoring volunteers conducting activities of daily living. This research aims to develop machine learning algorithms for identifying a person’s daily activities. The proposed model is flexible, adaptable, and scales well with data.
Authors: Rida Ghafoor Hussain, Muhammad Awais Azam, Mustansar Ali Ghazanfar, Usman Naeem, Christian Meurisch Full Text
Keywords: Bayes Net; Naïve Bayes; dynamic; ADL; classifiers
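Illustrative sketch (not from the paper): the Naïve Bayes stage named in the keywords can be prototyped with scikit-learn; the windowed sensor-count features and activity labels below are invented stand-ins for the authors’ smart-home data.

    # Hypothetical Naive Bayes baseline for activity-of-daily-living (ADL)
    # recognition; features stand in for windowed sensor-event counts.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # Toy data: 300 windows x 5 sensor-count features, 3 activity classes.
    X = rng.poisson(lam=(2, 5, 1, 4, 3), size=(300, 5)).astype(float)
    y = rng.integers(0, 3, size=300)       # e.g. 0=sleep, 1=cook, 2=leave home

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)
    print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))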
Abstract: This study reviews the expediency of the self-organizing map (SOM) in financial performance management and analysis of a Canadian bank; financial data of the Royal Bank of Canada (RBC) is analysed with an SOM application. The SOM is widely used in financial applications, including analyses of financial institutions around the financial crisis of 2008. It is an automatic data-analysis technique, mostly applied to visualization and clustering in data exploration. The objective of this study is to evaluate financial performance and to understand the influence of International Financial Reporting Standards (IFRS) after the convergence in 2011. The effects of Management Discussion and Analysis (MD&A) on financial performance were also analyzed from reported financial data gathered from RBC’s financial statements. SOM-Ward clustering for visualization helps in assessing the fundamentals based on a set of maps and clustered attribute solutions for measuring financial performance. The results of this study indicate that the SOM is a practicable tool for financial performance analysis and measurement in many financial sectors.
Authors: Manchuna Shanmuganathan Full Text
Keywords: Self-organizing map (SOM); financial performance; International Financial Reporting Standards (IFRS); Canadian Generally Accepted Accounting Principles (CGAAP); Management Discussion and Analysis (MD&A)
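Illustrative sketch (not from the study): a compact from-scratch self-organizing map trained on standardized financial ratios; the grid size, learning schedule, and toy input are assumptions, not the study’s configuration.

    # Minimal self-organizing map (SOM) sketch for clustering yearly
    # financial ratios; all hyperparameters are illustrative.
    import numpy as np

    def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.normal(size=(h, w, data.shape[1]))
        yy, xx = np.mgrid[0:h, 0:w]
        for t in range(iters):
            x = data[rng.integers(len(data))]
            lr = lr0 * np.exp(-t / iters)
            sigma = sigma0 * np.exp(-t / iters)
            # Best-matching unit (BMU): node whose weight vector is closest.
            d = ((weights - x) ** 2).sum(axis=2)
            bi, bj = np.unravel_index(d.argmin(), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward the sample.
            g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
        return weights

    # Toy input: rows = years, columns = standardized ratios (e.g. ROE, ROA).
    ratios = np.random.default_rng(1).normal(size=(40, 4))
    som = train_som(ratios)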
Abstract: In the modern era, each Internet user leaves enormous amounts of auxiliary digital residuals (footprints) by using a variety of on-line services, and all this data has already been collected and stored for many years. Recent works demonstrated that it is possible to apply simple machine learning methods to analyze collected digital footprints and to create psycho-demographic profiles of individuals. However, while these works clearly demonstrated the applicability of machine learning methods for such analysis, the simple prediction models they created still lack the accuracy necessary for practical applications. We assumed that using advanced deep machine learning methods could considerably increase the accuracy of predictions. We started with simple machine learning methods to estimate baseline prediction performance, and moved further by applying advanced methods based on shallow and deep neural networks. We then compared the prediction power of the studied models and drew conclusions about their performance. Finally, we hypothesized how prediction accuracy could be further improved. As a result of this work, we provide the full source code used in the experiments in the corresponding GitHub repository for all interested researchers and practitioners. We believe that applying deep machine learning to psycho-demographic profiling may have an enormous impact on society (for better or worse) and provides means for Artificial Intelligence (AI) systems to better understand humans by creating their psychological profiles. Thus, AI agents may achieve the human-like ability to participate in conversation (communication) flow by anticipating human interlocutors’ reactions, expectations, and behavior. By providing the full source code of our research, we hope to intensify further research in the area by a wider circle of scholars.
Authors: Iaroslav Omelianenko Full Text
Keywords: Deep machine learning; O.C.E.A.N. personality model; psycho-demographic profiling; TensorFlow; R programming language
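Illustrative sketch (not from the paper): the keywords mention TensorFlow and R; a Python/Keras version of a shallow network regressing the five O.C.E.A.N. trait scores from a digital-footprint vector might look as follows. The 1,000-dimensional footprint encoding, layer sizes, and stand-in data are all assumptions.

    # Hypothetical shallow-network baseline: map a digital-footprint feature
    # vector (e.g. bag of liked pages) to the five O.C.E.A.N. trait scores.
    import numpy as np
    import tensorflow as tf

    n_features = 1000                      # assumed footprint encoding size
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(5),          # O, C, E, A, N scores
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])

    # Toy stand-in data; the paper's real footprints are not reproduced here.
    X = np.random.rand(512, n_features).astype("float32")
    y = np.random.rand(512, 5).astype("float32")
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)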
Abstract: There are limitations in expressing the order of actions using Event-B. To solve this problem, the event refinement structure (ERS) method has been proposed to facilitate modeling of a system’s control flow. However, the event refinement structure cannot be translated directly into a behavioral semantic model such as communicating sequential processes (CSP) or a labeled transition system (LTS), so it is not convenient for engineers to verify the control flow. In this paper, we first propose a general method to model the control flow of an Event-B model with various iUML-B state machines. Then we show by simulation that the event trace of the iUML-B state machine is the same as that of the event refinement structure method. Finally, we use a case study of a lift control system to demonstrate the practicality of our method.
Authors: Han Peng, Chenglie Du, Haobin Wang Full Text
Keywords: Event-B; control flow modeling; iUML-B state machine; atomicity decomposition; event refinement structure
Abstract: A promising paradigm for achieving highly efficient deep neural networks is evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks, which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. This process results in highly sparse offspring networks which are particularly tailored for parallel computational devices such as GPUs and deep neural network accelerator chips. Comprehensive experimental results using four well-known deep neural network architectures (LeNet-5, AlexNet, ResNet-56, and DetectNet) on two different tasks (object categorization and object detection) demonstrate the efficiency of the proposed method. The cluster-driven genetic encoding scheme synthesizes networks that can achieve state-of-the-art performance with a significantly smaller number of synapses than the original ancestor network (a ∼125-fold decrease in synapses for MNIST). Furthermore, the improved cluster efficiency in the generated offspring networks (a ∼9.71-fold decrease in clusters for MNIST and a ∼8.16-fold decrease in clusters for KITTI) is particularly useful for accelerated performance on parallel computing hardware architectures such as those in GPUs and deep neural network accelerator chips.
Authors: Mohammad Javad Shafiee, Elnaz Barshan, Alexander Wong Full Text
Keywords: EvoNet; deep learning; evolution; deep neural net-work; embedded systems
Abstract: The university lecture hall is the most crowded place in a university, occupied by pedestrians who are mostly students. University students have daily schedules that require them to move from one place to another in the shortest time. However, the unbalanced and scattered distribution of important places (lecture and tutorial halls, general labs, the students’ center, etc.) causes unbalanced use of the lecture hall’s exits and uneven population density in the hall. Hence, in a panic situation, the evacuation process leads to high physical contact between pedestrians due to the heavy usage of a few exits, causing crowd bottlenecks. This research studies and simulates pedestrian movement in the university lecture hall to determine the most used exit and the reasons for its heavy usage. The simulation uses the cellular automata approach as a discrete model for the microscopic movement of pedestrians. At the end of this research, the university will be offered solutions to overcome this situation. Building design and construction planning are highlighted for future enhancement towards a sustainable and prudent learning space for the university’s students.
Authors: Najihah Ibrahim, Nur Shazreen Nabiha Mat Tan Salleh, Fadratul Hafinaz Hassan Full Text
Keywords: Cellular automata approach; microscopic movement; panic situation; normal situation; university lecture hall; pedestrian flow rate
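Illustrative sketch (not from the paper): a toy cellular-automata update in which each pedestrian moves to the free neighbouring cell with the smallest distance to the exit (a static floor field); the hall layout, exit position, and pedestrian placement are invented.

    # Toy cellular-automata step: each pedestrian moves to the free
    # neighboring cell with the smallest distance-to-exit value.
    import numpy as np

    H, W = 10, 14
    exit_cell = (9, 7)                           # assumed single exit
    yy, xx = np.mgrid[0:H, 0:W]
    floor = np.abs(yy - exit_cell[0]) + np.abs(xx - exit_cell[1])

    occ = np.zeros((H, W), bool)
    occ[2, 3] = occ[4, 10] = occ[1, 7] = True    # three pedestrians

    def step(occ):
        new = occ.copy()
        for r, c in zip(*np.nonzero(occ)):
            best, target = floor[r, c], None
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W and not new[nr, nc] \
                        and floor[nr, nc] < best:
                    best, target = floor[nr, nc], (nr, nc)
            if target is not None:
                new[r, c] = False
                new[target] = True
        return new

    for _ in range(20):                          # advance 20 time steps
        occ = step(occ)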
Abstract: Power System Stabilizers (PSSs) are supplementary controllers that enhance the damping of electromechanical oscillations in synchronous generators. A fractional-order supplementary controller, which features broad bandwidth, a memory effect, and flatness in phase contribution, is proposed in this paper. The fractional parameter enables the stabilizer to perform well against a wide range of disturbance uncertainties in the power system. The fractional-order PSS parameter tuning problem is formulated as an optimization problem that is solved using the Bacteria Foraging Algorithm (BFA) in a multimachine environment. Since BFA is a highly efficient optimization technique with fast global search and convergence, it is popular in power system application domains for solving real-world optimization problems. The robustness of the proposed BFA-based fractional-order PSS (BFA-FoPSS) is verified in a multi-machine power system under a wide range of operating conditions and by introducing faults of different sizes at different locations. The efficiency of the proposed BFA-FoPSS is demonstrated through time-domain simulations, eigenvalue analysis, and a performance index. The results are also compared with a PSO-based conventional PSS (PSO-CPSS) and a PSO-based FoPSS (PSO-FoPSS) to establish the effect of the fractional parameter on the improvement of the system dynamic response, and the relevance of the proposed PSS for extending the dynamic stability limit of the system under various loading and generating conditions.
Authors: Haseena K A, Jeevamma Jacob, Abraham T Mathew Full Text
Keywords: Stability of synchronous machines; robust control; power system stabilizer; fractional order control; bacteria foraging algorithm
Abstract: This work presents the results of applying an advanced performance monitoring technique to a centrifugal compressor system using a deep recurrent neural network (DRNN). In reality, due to different kinds of disturbances, the compressor system may reach catastrophic situations; performance monitoring has therefore become an issue of primary importance in modern process engineering automation. Detecting anomalies in such scenarios becomes challenging using standard statistical approaches. In this article, we discuss a Long Short-Term Memory (LSTM) based DRNN technique to predict faulty behavior of the compressor system. Due to the ability of LSTMs to maintain memory, these networks have proven effective for learning patterns in time series data of unknown length. This motivates us to propose a performance monitoring scheme based on LSTM-DRNN. To validate the proposed approach, we simulated the compressor model in Simulink and trained the LSTM-DRNN model on the time series data obtained from the compressor system running under ideal conditions. Further, the trained network has been used to detect anomalies in time series data generated by introducing a disturbance in the form of inlet temperature changes.
Authors: Harsh Purohit, Karmvir Phogat, P.S.V. Nataraj Full Text
Keywords: Performance monitoring; LSTM-DRNN; anomaly detection; compressor control system
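Illustrative sketch (not from the paper): the monitoring idea can be prototyped in Keras by training an LSTM for one-step-ahead prediction on normal-operation windows and flagging samples whose prediction error exceeds a threshold. The window length, network size, threshold rule, and stand-in signal are assumptions.

    # Hypothetical LSTM anomaly-detection sketch: learn one-step-ahead
    # prediction on a "normal" signal, then flag large prediction errors.
    import numpy as np
    import tensorflow as tf

    def windows(series, w=20):
        X = np.stack([series[i:i + w] for i in range(len(series) - w)])
        return X[..., None], series[w:]          # shapes (n, w, 1) and (n,)

    normal = np.sin(np.linspace(0, 60, 1500)).astype("float32")  # stand-in
    X, y = windows(normal)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(20, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=64, verbose=0)

    # Threshold from training residuals (mean + 4 std is an assumption).
    res = np.abs(model.predict(X, verbose=0).ravel() - y)
    thresh = res.mean() + 4 * res.std()

    test = normal.copy()
    test[800:820] += 1.5                         # injected "inlet disturbance"
    Xt, yt = windows(test)
    err = np.abs(model.predict(Xt, verbose=0).ravel() - yt)
    alarms = np.nonzero(err > thresh)[0]         # indices flagged as anomalous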
Abstract: This paper addresses the convergence issues of the H∞ control algorithm via a matrix modulation technique on the mathematical generalized plant model of a system. It further presents a general solution to the robust control algorithm convergence problem for MIMO systems. The proposed controller is optimized for output error regulation by comparing the outputs of a higher-order MIMO system to those of a slightly underdamped second-order plant. The matrix modulation approach considers two singularities, viz., 1) control singularity; and 2) sensor singularity. The resulting controller is tested on a laboratory helicopter model, known as the Twin Rotor MIMO System (TRMS), for take-off and hovering.
Authors: Parthish Kumar Paul, Jeevamma Jacob Full Text
Keywords: Matrix modulation; Twin Rotor MIMO System (TRMS); singularity; generalized plant; H∞ control; output error regulation
Abstract: Chaos in dynamical systems is still considered to be a somewhat curious, and generally undesirable, property of non-linear systems. Despite the plethora of chaotic control methods published over the last decades, only in a few instances has the control of chaos been used to address real-world problems in engineering or medicine. This is partly due to the limitations of the control methods used, which either require specific analytical knowledge of the system, or require the system to have specific characteristics in order to be controllable. The lack of solutions for engineering and biomedical problems may also be due to specific requirements that prevent the implementation of control methods, and to the as yet unproven benefits that controlled chaos may bring to these problems. The aim of a practical application of chaos control is to fully control chaos in theoretical problems first, and then show applicable solutions to physical problems of stability and control. This controlled chaotic state should then have clear and distinct dynamic advantages over uncontrolled chaos and steady-state systems. The Rate Control of Chaos (RCC) method, which is derived from metabolic control processes, has already been shown to be effective in controlling several engineering problems. RCC allows non-linear systems to be stabilised into controlled oscillations, even across bifurcations, and it also allows the system to operate in regions of the parameter space that are inaccessible without this method of control. For fun, I will show that RCC controls the N-body problem; for profit, it can control a bioreactor model to greatly improve yield. The RCC method promises to, finally, permit the control of complex dynamic systems.
Authors: Tjeerd V. olde Scheper Full Text
Keywords: Chaos; control of chaos; bio-inspired computing
Abstract: Support vector machines (SVMs) are an important tool in modern data analysis. Traditionally, support vector machines have been fitted via quadratic programming, either using purpose-built or off-the-shelf algorithms. An alternative approach to SVM fitting is presented via the majorization–minimization (MM) paradigm. Algorithms that are derived via MM algorithm constructions can be shown to monotonically decrease their objectives at each iteration, as well as be globally convergent to stationary points. Constructions of iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm, for SVM risk minimization problems involving the hinge, least-square, squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net penalizations are presented. Successful implementations of the algorithms are demonstrated via some numerical examples.
Authors: Hien D. Nguyen, Geoffrey J. McLachlan Full Text
Keywords: Iteratively-reweighted least-squares; support vector machines; majorization–minimization algorithm
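Illustrative sketch (not the paper's exact derivations): one MM/IRLS construction for the 2-norm-penalized hinge loss uses the standard absolute-value majorizer |v| <= v^2/(2c) + c/2 at c = |v_prev|, which turns each iteration into a weighted ridge regression and guarantees a monotonically decreasing objective. The toy data and the penalized bias term are simplifications.

    # MM/IRLS sketch for a 2-norm-penalized hinge-loss SVM: each iteration
    # solves a weighted ridge problem built from the current margins.
    import numpy as np

    def mm_svm(X, y, lam=1.0, iters=50, eps=1e-6):
        """X: (n, p) features, y: (n,) labels in {-1, +1}."""
        n, p = X.shape
        beta = np.zeros(p)
        for _ in range(iters):
            v = 1.0 - y * (X @ beta)             # hinge argument
            c = np.maximum(np.abs(v), eps)       # majorization anchors
            w = 1.0 / (4.0 * c)                  # IRLS weights
            z = y * (1.0 + c)                    # working responses
            A = X.T @ (w[:, None] * X) + lam * np.eye(p)
            beta = np.linalg.solve(A, X.T @ (w * z))
        return beta

    # Toy usage: two Gaussian clouds with labels +/-1; last column = bias.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(1, 1, (50, 2)), rng.normal(-1, 1, (50, 2))])
    Xb = np.hstack([X, np.ones((100, 1))])
    y = np.r_[np.ones(50), -np.ones(50)]
    beta = mm_svm(Xb, y)
    acc = np.mean(np.sign(Xb @ beta) == y)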
Abstract: Air traffic controllers (ATCs) face a major daily concern: controlled airspace (CAS) infringements. An infringement occurs when a general aviation (GA) aircraft penetrates CAS without advance clearance from the ATC. These infringements could cause a mid-air collision with authorized aircraft inside the CAS whose conflicts were not resolved ahead of time. They also disrupt ATC operations by creating additional workload and requiring revised manoeuvre tactics. Our two previous papers focused on predicting future aircraft locations and their probability of infringement in order to alert the ATC in advance. So far, we have dealt with a single aircraft approaching CAS; this paper, however, focuses on the scenario in which multiple aircraft infringe CAS, in case the ATC does not react quickly enough. As of 2020, all GA aircraft must be equipped with a transponder that sends information such as flight ID, exact location, and altitude. Using this assumption, we investigate a possible model which alerts and directs multiple GA aircraft out of CAS without interfering with commercial traffic. The kinetic triangulation method is used as an automated manoeuvring tactic, leaving the ATC to focus only on directing commercial flights.
Authors: Yousra Almathami, Reda Ammar Full Text
Keywords: Switching Kalman filters; controlled airspace; aircraft infringements; ground based safety system; polygon triangulation
Abstract: Retaining customers and determining the loyalty of existing customers are important aspects of today’s business industry. In this paper, the behavior of different machine learning rules on Hopfield nets is studied. This is a continuation of work on classifying a real customer dataset into four classes: Super Premium Loyal Customer (SPL), Premium Loyal Customer (PL), Valued Customer (VC), and Normal Customer (NC). The model enhances the approach of determining customer loyalty using Hebbian and Storkey learning in a Hopfield Neural Network (HNN). HNNs are reported to give good accuracy on image datasets, but with some preprocessing of the customer dataset they show a very reasonable accuracy of around 85%. The proposed framework was also tested on a breast cancer dataset, and the results are tabulated in the paper.
Authors: Pooja Agarwal, Abhijit J. Thophilus, Arti Arya, Suryaprasad Jayadevappa Full Text
Keywords: Customer loyalty; Hopfield Neural Networks; learning rate; momentum; Storkey learning; Hebbian learning; Softmax activation function
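Illustrative sketch (not from the paper): a minimal Hopfield network with Hebbian (outer-product) storage and asynchronous recall. The bipolar toy patterns stand in for the encoded customer records; the Storkey rule, which adds a local-field correction term, is omitted for brevity.

    # Minimal Hopfield network: Hebbian (outer-product) storage and
    # asynchronous recall of bipolar (+/-1) patterns.  Toy patterns only.
    import numpy as np

    def hebbian_weights(patterns):
        """patterns: (m, n) array of +/-1 vectors."""
        m, n = patterns.shape
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)          # no self-connections
        return W

    def recall(W, x, steps=5, seed=0):
        rng = np.random.default_rng(seed)
        x = x.copy()
        for _ in range(steps):
            for i in rng.permutation(len(x)):   # asynchronous updates
                x[i] = 1 if W[i] @ x >= 0 else -1
        return x

    pats = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
    W = hebbian_weights(pats)
    noisy = pats[0].copy()
    noisy[0] *= -1                        # flip one bit
    print(recall(W, noisy))               # recovers pats[0]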
Abstract: Unmanned Aerial Vehicles (UAVs) have been increasingly used in military and civilian applications. Even though UAV routing problems have similarities with Vehicle Routing Problems, there are still many problems for which effective and efficient solutions are lacking. We propose a randomized heuristic algorithm for the cyclic routing of a single UAV. The UAV is required to visit a set of target areas where the time interval between consecutive visits to each area cannot exceed its relative deadline. This PSPACE-complete problem has solutions whose length may be exponential. Our algorithm tries to compute a feasible cyclic route while keeping the cycle time short. Tests on 57 instances of the problem show that the algorithm has good effectiveness and efficiency.
Authors: Cheng Siang Lim, Shell Ying Huang Full Text
Keywords: Single Unmanned Aerial Vehicle (UAV); cyclic routing; randomization; heuristic
Abstract: Our everyday environments are becoming more and more connected and “smart”. Intelligent Interactive Systems (IIS) is an umbrella term for environments that are characterized by their ability to process data and generate responsive behavior using sensors, actuators, and microprocessors. Sentient Architecture generates an artful, imaginative, and engaging environment in which we can experiment with and observe human behavior and capabilities when confronted with IIS. This paper outlines a user study to test the value of a 3D augmented reality visualization which shows data flows and bursts of activity in a Sentient Architecture sculpture named “Sentient Veil” in Boston, MA. Hence, our visualization is fittingly titled Lifting the Veil.
Authors: Andreas Bueckle, Katy Borner, Philip Beesley, Matthew Spremulli Full Text
Keywords: Intelligent interactive systems; engineering; information visualization; microprocessor; 3D; Internet of Things; mobile applications
Abstract: In nature and societies, the power-law is ubiquitously present, so it is important to investigate the characteristics of power-laws in the recent era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power-laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws, such that the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
Authors: Masaru Shintani, Ken Umeno Full Text
Keywords: Power-law; big data; limit distribution
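Illustrative simulation (not from the paper): under the generalized central limit theorem, sums of heavy-tailed summands with tail index alpha < 2, normalized by n^(1/alpha), approach an alpha-stable law. The sketch below checks that the empirical tail exponent of such normalized sums stays near alpha; all parameters are illustrative.

    # Toy generalized-CLT check: normalized sums of symmetric Pareto
    # (alpha < 2) variables keep a power-law tail with the same exponent.
    import numpy as np

    alpha, n, trials = 1.5, 1000, 5000
    rng = np.random.default_rng(0)
    u = rng.random((trials, n))
    x = rng.choice((-1.0, 1.0), size=(trials, n)) * u ** (-1.0 / alpha)
    s = x.sum(axis=1) / n ** (1.0 / alpha)   # stable-law normalization

    # Rough tail-exponent estimate from a log-log rank plot of |s|.
    tail = np.sort(np.abs(s))[-500:]
    ranks = np.arange(len(tail), 0, -1)
    slope = np.polyfit(np.log(tail), np.log(ranks), 1)[0]
    print("estimated tail exponent:", -slope)  # roughly alpha = 1.5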
Abstract: In this research, we propose to store the equi-join relationships of tuples on inexpensive and space-abundant devices, such as disks, to facilitate query processing. The equi-join relationships are captured, grouped, and stored as various tables on disks, which are collectively called the Join Core. Queries involving arbitrary legitimate sequences of equi-joins, semi-joins, outer-joins, anti-joins, unions, differences, and intersections can all be answered quickly by merely merging these tables, without having to perform joins. The Join Core can also be updated dynamically. Preliminary experimental results showed that all test queries began to generate results instantly, and many completed instantly too. The proposed methodology can be very useful for queries with complex joins of large relations, as there are fewer or even no relations or intermediate results that need to be retrieved or generated.
Authors: Mohammed Hamdi, Sarah Alswedani, Feng Yu, Wen-Chi Hou Full Text
Keywords: Query processing; join queries; equi-join; semi-join; outer-join; anti-join; set operations
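Illustrative sketch (not the paper's system): the idea of answering joins from precomputed equi-join relationships can be mimicked in sqlite3 by materializing the matched tuple pairs once, as a stand-in for one Join Core table, and answering later queries by scanning it. The schema and data are invented.

    # Illustrative "precomputed join" sketch: store the matched tuple pairs
    # of an equi-join once, then answer later queries without re-joining.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders(order_id INTEGER, cust_id INTEGER, total REAL);
        CREATE TABLE customers(cust_id INTEGER, name TEXT);
        INSERT INTO orders VALUES (1, 10, 5.0), (2, 11, 7.5), (3, 10, 2.0);
        INSERT INTO customers VALUES (10, 'Ada'), (11, 'Bob');
        -- Stand-in for one Join Core table: the equi-join captured once.
        CREATE TABLE jc_orders_customers AS
            SELECT o.order_id, o.total, c.cust_id, c.name
            FROM orders o JOIN customers c ON o.cust_id = c.cust_id;
    """)

    # A later join query is answered by a scan of the stored table.
    for row in con.execute(
            "SELECT name, SUM(total) FROM jc_orders_customers GROUP BY name"):
        print(row)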
Abstract: This work investigated using n-grams, parts of speech, and support vector machines for detecting customer intents in user-generated content. The work demonstrated a categorization of customer intents that is concise and useful for business purposes. We examined possible sources of text posts to be analyzed using three text mining algorithms. We presented the three algorithms and the results of testing them in detecting six different intents. This work established that intent detection can be performed on text posts with approximately 61% accuracy.
Authors: Samantha Akulick, El Sayed Mahmoud Full Text
Keywords: Intent detection; text mining; support vector machines; N-grams; parts of speech
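Illustrative sketch (not from the paper): the n-gram plus linear SVM variant is straightforward in scikit-learn; the example posts and intent labels are invented, and the part-of-speech features the paper also examines are omitted.

    # Hypothetical n-gram + linear SVM intent detector; the posts and
    # labels are invented stand-ins for the paper's data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    posts = [
        "where can I buy this phone",
        "my order never arrived, I want a refund",
        "is there a student discount",
        "how do I cancel my subscription",
    ]
    intents = ["purchase", "complaint", "purchase", "cancel"]

    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),   # unigrams + bigrams
        LinearSVC(),
    )
    clf.fit(posts, intents)
    print(clf.predict(["I would like to buy one"]))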
Abstract: The explosion in Big Data Analytics research provides a massive amount of software capabilities, publications, and conference proceedings, making it difficult to sift through and inter-relate it all. A vast amount of new terminology and professional jargon has been created and adopted in recent years. It is important not only to comprehend the meaning of terms, but also to understand how they contrast and synergize with one another. This paper serves to address the need for building a consistent vocabulary for the newly growing domain of Big Data Analytics. Understanding and adoption of a common, consistent vocabulary promotes interdisciplinary communication and collaboration and removes entrance barriers for anyone entering the growing world of Big Data Analytics. Using a step-by-step algorithm based on bibliometric and content analyses of existing peer-reviewed literature, a sample Big Data Analytics vocabulary is built. The approach includes storing terms in a relational database and being able to retrieve and visualize co-related terms, thus establishing connections between them. The step-by-step procedure described in the paper involves: 1) collection of data; 2) data manipulation such as elimination of duplicates, identification of synonyms, grammatical forms of the same root words, and variations in spelling; 3) calculation of the frequency of use of the same term; and finally 4) generation of various reports, including most frequently used terms per paper or per narrower category, or, in a future release, identification of the most likely category based on the combination of co-located terms. For this current proof-of-concept effort, due to complexities and exceptions when dealing with natural language (English), some steps of the process cannot be fully automated and hence require manual verification or adjustment, although considerable effort was made to minimize the amount of human intervention. The procedure can be repeated periodically with relative ease to observe and report possible changes in the dynamic field of Big Data Analytics and discover newly created vocabulary. Big Data Analytics was chosen for this project because it is a not yet thoroughly documented but fast-growing field with a critical mass of published works already accumulated. This paper hopes to help with the creation of educational materials and the demarcation of the domain, while encouraging full research coverage in Big Data Analytics, by promoting the discovery and articulation of common principles and solutions.
Authors: Lyublyana Turiy Full Text
Keywords: Big data analytics; domain; controlled vocabulary; keyword; content analysis
Abstract: The amount of information available on new technologies has risen sharply in recent years. In turn, this has increased interest in automated tools to mine this information for useful insights into technology trends, with a particular focus on locating emerging, breakthrough technologies. This paper first outlines an automated framework for technology forecasting developed for the Department of Defense. It then proposes various enhancements to this framework, focusing in particular on utilizing social media data more effectively. Specific topics covered include technology forecasting via Twitter trusted sources and via identification of authoritative Twitter handles. Beyond improving the framework itself, the techniques described in this paper may also be of general interest to researchers using social media data, particularly for technology forecasting.
Authors: Anthony Breitzman, Patrick Thomas Full Text
Keywords: Twitter mining; text mining; emerging technologies
Abstract: Festivals are an important leisure activity in the life of human beings, and the organizers of festivals are interested in offering quality activities that allow them to position themselves in the entertainment market. To achieve this aim, organizers use surveys to obtain a global opinion of the participants, focusing on three key points: motivation, perception, and valuation. This method is tedious to perform and time-consuming. In this effort, we present a complete process for automatically obtaining an overall appreciation of a festival from tweets shared by participants. The aim of this contribution is to replace surveys with textual analysis of messages posted on social networks. The precision obtained in our experiments highlights the relevance of our proposal.
Authors: Juandiego Morzan-Samame, Miguel Nunez-del-Prado, Hugo Alatrista-Salas Full Text
Keywords: Sentiment analysis; machine learning algorithms; festival; survey analysis
Abstract: We introduce the PseudoGravity tool, an automated social media system that establishes a social media presence in the area of interest of a target audience, identifies target users that are open to connecting, engages with them, and elicits a complex response and time investment from them. In this work, we use Twitter as the social media platform and an extensive survey as the activity requiring time investment. We evaluate the tool by using it to find and survey a challenging target – science fiction authors – and compare its results with other methods of automated online surveys. In 28 months, the Twitter account managed by the tool attracted more than 12,000 followers and achieved monthly Tweet impressions of more than 250,000. The tool also achieved a high survey response rate of 71% and a completion rate of 83%, compared to the 30% and 47% achieved by typical online surveys, as well as high numbers of words and characters entered for questions that required free-text input. In addition, this work successfully surveyed more than 500 science fiction writers and gained new understanding of the challenges that e-publishing is bringing to their profession.
Authors: Soo Ling Lim, Peter J Bentley Full Text
Keywords: Social media; Twitter; artificial intelligence; automated engagement; marketing; online survey; e-publishing
Abstract: Cloud infrastructures afford a proper environment for the execution of large-scale big data applications. The scheduling of a substantial number of tasks in the heterogeneous multi-tenant cloud environment is one of the most significant research challenges in the current era. The major challenges of task allocation are to optimize the overall completion time, cost of execution, and tardiness, and to utilize idle cloud resources effectively. In this paper, we propose a novel scheduling algorithm for allocating cloud resources to tasks that optimizes the overall execution time by minimizing response time. To assess the effectiveness of our proposed algorithm, we compared our solution with six standard competing algorithms for the optimization of performance metrics in the cloud environment. The results confirm that our proposed algorithm performs better than the other state-of-the-art algorithms in terms of response time (allocation time), makespan, and total execution time.
Authors: Sambit Kumar Mishra, P. Satya Manikyam, Bibhudatta Sahoo, Mohammad S. Obaidat, Deepak Puthal, Mahardhika Pratama Full Text
Keywords: Scheduling; cloud computing; task allocation; allocation time; execution time
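Illustrative sketch (not the authors' algorithm): a toy greedy baseline in the spirit of minimum-completion-time scheduling, which assigns each task to the VM that would finish it earliest; all task lengths and VM speeds are invented.

    # Toy minimum-completion-time (MCT-style) allocation: each task goes
    # to the VM with the earliest finish time.  Numbers are invented.
    tasks = [400, 250, 900, 120, 600, 330]     # task lengths (MI)
    speeds = [100, 150, 200]                   # VM speeds (MIPS)
    ready = [0.0] * len(speeds)                # per-VM ready times
    plan = []

    for t in sorted(tasks, reverse=True):      # longest task first
        finish = [ready[v] + t / speeds[v] for v in range(len(speeds))]
        v = min(range(len(speeds)), key=finish.__getitem__)
        ready[v] = finish[v]
        plan.append((t, v))

    print("makespan:", max(ready))
    print("plan:", plan)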
Abstract: Internet of Things (IoT) services are unstoppably demanding more computing and storage resources. Aligned with this trend, cloud and fog computing came up as the proper paradigms to meet such IoT service demands. More recently, a new paradigm, so-called fog-to-cloud (F2C) computing, promises to make the most out of both fog and cloud, paving the way to new IoT services development. Nevertheless, the benefits of F2C architectures may be diminished by failures affecting the computing commodities. In order to withstand possible failures, novel protection strategies specifically designed for distributed computing scenarios are required. In this paper, we study the impact of distinct protection strategies on several key performance aspects, including service response time and usage of computing resources. Numerical results indicate that under distinct failure scenarios, F2C significantly outperforms the conventional cloud.
Authors: Vitor Barbosa Souza, Wilson Ramírez, Xavier Masip-Bruin, Eva Marín-Tordera, Sergio Sánchez-López, Guang-Jie Ren Full Text
Keywords: Cloud computing; fog computing; fog-to-cloud computing; Internet of Things; service protection
Abstract: The paths of development from the first computers to modern computers, which are assembled from transistors, chips, integrated circuits on crystals, etc., are considered. New substances, materials, electronic devices, and manipulators built from new micro- and nano-elements are being created. It is noted that computer technologies provide people with communication (mail, video, Skype, etc.) and improve the everyday life of every member of the world community. An analysis of the development of computer engineering is given, and the technologies of future supercomputers are outlined. A number of operating supercomputers for simulation and digital manufacturing in industry, energy, transport, and construction are presented. The development of the Internet, life-cycle technologies, ontologies, and smart products in the fields of e-science (biology, genetics, physics, medicine, energy, etc.) and industry is discussed.
Authors: E.M. Lavrischeva, I.B. Petrov Full Text
Keywords: Nano-elements; computer technology; engineering; technology systems; assembling modules; molecules; small computers; Internet of Things
Abstract: Parallel computation is an extension to multiprogramming architectures, usually structured as a tightly coupled organization of multiple CPU cores. Systems under such configurations require much effort to manage multiple tasks simultaneously. Operating systems for such hardware follow several real-time constraints in order to enhance system performance. Normally, the operating system designates one processor as a controller which acts as a load scheduler for the others and performs balancing when system performance degrades due to overloading of some of the CPU cores. Apart from tightly coupled systems, another cost-effective organization for achieving parallelism is to interconnect multiple computers as a cluster network. The advantage of this loosely coupled system is that it is under programmatic control; low-level socket connections are created to make machine-to-machine communication possible. This work focuses on the Multi-Ethernet Wired LAN Cluster (MEWC) and the Broadcasting Wireless Access LAN Cluster (BWAC) for executing multiple tasks like a grid. Further, the work analyzes both wired and wireless clusters along with some factors relevant to the communication network. A wireless network of multi-computers has an advantage in transmission speed: in a wireless cluster, enhanced data transmission speed is achieved through broadcast links, whereas a wired network relies on single communication links, although a Multi-Ethernet distribution may be adopted for improvement. Even so, the wireless LAN offers many advantages.
Authors: Deepak Sharma Full Text
Keywords: Parallel cluster; wireless communication network; broadcasting data; access point; multi-computers; Multiple Ethernet Interface Card
Abstract: Cloud computing aims to deliver more energy-efficient computing provision. The potential advantages are primarily based on the opportunities to achieve economies of scale through resource sharing: in particular, by concentrating data storage and processing within data centres, where energy efficiency and measurement are well-established activities. However, this addresses only a part of the overall energy cost of the totality of the cloud, because energy is also required to power the networking connections and the end-user systems through which access to the data centre is provided. The impact of application software behaviour on the overall system’s energy use within a cloud is less understood. This is of particular concern when one considers the current trend towards “off the shelf” applications accessed from application stores. This mass market for complete applications, or code segments which are included within other applications, creates a very real need for that code to be as efficient as possible, since even small inefficiencies, when massively duplicated, will result in significant energy loss. This position paper identifies this problem and proposes a supporting tool which will indicate to software developers the energy efficiency of their software as it is developed. Fundamental to the delivery of any workable solution is the measurement and selection of suitable metrics; we propose appropriate metrics and indicate how they may be derived and applied within our proposed system. Addressing the potential cost of application development is fundamental to achieving energy saving within the cloud, particularly as the application store model gains acceptance.
Authors: Ah-Lian Kor, Colin Pattinson Full Text
Keywords: Energy efficiency; green computing; programming model; energy efficient cloud
Abstract: A task graph is an intuitive way to represent the execution of parallel processes on many modern computing platforms. It can also be used for performance modeling and simulation in a network of computers. Common implementations of task graphs usually involve a form of message passing protocol, which depends on a standard message passing library in the existing operating system. Not every emerging platform has such support from mainstream operating systems; one example is the Spiking Neural Network Architecture (SpiNNaker) system, a neuromorphic computer originally intended as a brain-style information processing system. As a massive many-core computing system, SpiNNaker not only offers abundant processing resources, but also a low-power and flexible application-oriented platform. In this paper, we present an efficient mapping strategy for a task graph on a SpiNNaker machine. The method relies on the existing low-level SpiNNaker kernel that provides direct access to the SpiNNaker elements. As a result, a fault-tolerance-aware task graph framework suitable for high performance computing can be achieved. The experimental results show that SpiNNaker offers very low communication latency and demonstrate that the mapping strategy is suitable for large task graph networks.
Authors: Indar Sugiarto, Pedro Campos, Nizar Dahir, Gianluca Tempesti, Steve Furber Full Text
Keywords: Task graph; mapping; neuromorphic; Spiking Neural Network Architecture (SpiNNaker)
Abstract: Cloud-based data centers consume a considerable amount of energy, which is expensive. The virtualization technique helps to overcome various issues, including the energy issue. Because of the dynamic nature of workloads, task consolidation is an effective technique to decrease the total number of servers and unnecessary migrations, and consequently optimize energy. Effective task allocation techniques are key to optimizing several performance parameters in the cloud system. This paper presents a novel task consolidation technique to achieve an optimal balance of energy, makespan, and throughput in the cloud data center. We evaluate the performance of our proposed algorithm using simulation analysis in the Java-based CloudSim simulator environment. The performance evaluation results certify that our proposed algorithm reduces energy consumption compared to existing standard algorithms, and optimizes the makespan and throughput of the cloud data center.
Authors: Sambit Kumar Mishra, Saurabh Kumar Choudhary, Bibhudatta Sahoo, Mahardhika Pratama, Mohammad S. Obaidat, Deepak Puthal Full Text
Keywords: Cloud computing; task scheduling; energy consumption; makespan; throughput; simulation
Abstract: As Quantum Annealing Computers (QACs) like the D-Wave 2000Q Adiabatic Quantum Systems emerge, we aim to investigate the potential synergy between QAC and HPC as we push toward exascale supercomputing, where manual parallel programming for millions of processor cores will become prohibitive. Quantum refactoring is proposed here only as a possible future concept (not yet implemented on QACs) to automatically tweak the code sequence more efficiently than through repeated manual pair-wise operation swaps, in order to optimize computation speed, memory storage, hit ratio, cost, reliability, power and/or energy saving. To facilitate auto code refactoring suitable for such annealing optimization, a self-organizing matrix transform is proposed in this paper, so that QAC can be applied to automatic code sequence permutation via a computation matrix transform model toward optimized matching between computation and parallel processor cores. The mathematical model to achieve these goals builds on the causal set properties of the Matrix Model of Computation (MMC). A sequence of transformations acts as the code refactoring to compact code regions as computation decomposition for parallel multi-core/multi-thread execution. Besides the improved software/hardware matching, the self-organizing matrix approach also serves as a novel paradigm for automatic parallel programming, as well as a systematic tool for formal design modeling.
Authors: Liwen Shih, Hon Lum Full Text
Keywords: Automatic Software-Hardware Resource Mapping; Self-Organized Code Refactoring; Permutation; Causal Matrix Model of Computation; Data-Flow Discovery; Quantum Annealing Optimization
Abstract: With the arrival of the Adiabatic Quantum Annealing Computer (QAC) era, there are ample opportunities to search for the missing links between QACs and HPC (High-Performance Computing). QACs such as the D-Wave 2000Q Systems are analog quantum annealers capable of instantly zooming in on optimal solutions. We are optimistically aiming at broadening the perspective and impact of QAC by harvesting QAC progress to potentially benefit almost every HPC application by optimizing the software/hardware mapping. The narrowed-down, fittest processor schedules found through quantum (or hybrid classical simulated) annealing search enhancement can then be further compared and fine-tuned to run computation more efficiently in production mode on target HPC systems. With our novel perspective of linking QAC and HPC for a broader application impact, we hope to encourage more and varied development of emerging quantum computer endeavors, to eventually make most manual tweaking in problem solving, including parallel programming, unnecessary.
Authors: Liwen Shih Full Text
Keywords: Quantum annealing; adaptive parallel software/hardware mapping; algorithm-specific processor scheduling; topology-aware network scheduling; optimization
Abstract: From the point of view of computerizing teaching processes, there is an absence of state-of-the-art approaches to their automation derived from “knowledge” as the key element of teaching and learning (including the human knowledge transmission that takes place within communication and feedback). This contribution presents such an approach, dealing with how automation is solved when computerizing teaching processes. It has been developed within published long-term research on technology-enhanced learning and works in teaching practice (it is applied to teaching bachelor students). The approach is based on the design of the “virtual knowledge unit” as the default data structure, which is both human- and machine-readable. This enables the teacher not only to process learning content but also to automate communication and feedback, thanks to the possibility of transmitting knowledge, i.e. the teaching and learning content, between offline and online environments using the virtual knowledge unit. The virtual knowledge unit isomorphically joins the mental processes of humans with the physical processes of machines. From the practical point of view, the data structure is handled and controlled by the in-house software BIKEE. This database application enables a teacher to handle any kind of teaching and learning process tailor-made for him. Based on teaching practice, this approach seems to be beyond the state of the art; this can be concluded because a registered utility model, based on the use of virtual knowledge, is used for the knowledge transfer. Some aspects of the computerization of teaching processes are discussed with regard to the automation of mental processes. In this context, current research is focused on the development of an educational robot.
Authors: Stefan Svetsky, Oliver Moravcik Full Text
Keywords: automation of teaching processes; knowledge representation; technology-enhanced learning; educational technology; Cybernetics
Abstract: The flipped classroom is gaining popularity as a teaching strategy that allows instructors to create an active learning environment. It places the responsibility of learning on the students and changes their role from listeners to learners. In a previous paper, the authors presented an example of a flipped-classroom approach to a one-semester “Fundamentals of Digital Design” required course for Electrical and Computer Engineering majors, in order to lower its failure rate and to further motivate students so as to reduce overall attrition. The authors used the LivescribeTM paper-based computing platform, which consists of a digital pen, AnotoTM digital paper, software applications, and developer tools, to create the online recorded lectures and problems which were uploaded to “Blackboard” for students to view and solve at home. The authors used this technology as well as the concept of “Just-in-Time Teaching” (JiTT) to provide the “feedback loop” that shapes what happens during the subsequent in-class time together. The authors concluded that while the flipped version of EENG 125 “Fundamentals of Digital Logic” succeeded in improving student retention, and while the approach was popular with students, with respect to class averages and standard deviations the results were not much better than in a traditional classroom which incorporated a high level of active learning activities. As a result, the authors decided to incorporate heterogeneous student groupings, so as to provide each student an opportunity to work through problems both independently and in collaboration with their peers, as well as Out-of-Class Assessment Techniques (OoCATs) such as the “Minute Paper” and the “Muddiest Point” to provide the authors with useful feedback on the recorded lectures and problem-solving assignments. The authors will then assess this new flipped version of EENG 125 against the traditional and active learning versions of the course.
Authors: Steven Billis, Nada Anid Full Text
Keywords: Just-in-Time Teaching (JiTT); LivescribeTM; minute paper; muddiest point
Abstract: Student motivation and engagement difficulties are present in higher education. Among the many techniques for increasing student motivation and engagement, we found gamification to be the most suitable. This paper presents our experiment in using gamification in the learning process, based on the Agile methodology, in order to obtain the best results and engagement from students. Applying gamification in software engineering is not as straightforward as it may appear. Current research in the area has already recognized the possible use of gamification in the context of software development, but how to design and use gamification in this context is still an open area of research. Higher education universities, especially in the Middle East, sometimes face problems in obtaining student engagement and motivation within a group structure. This paper supports the proposed idea with a preliminary experiment that shows the effect of gamification on the performance of students involved in a project funded by TRC (The Research Council) in the Sultanate of Oman.
Authors: Rula Al Azawi, Dawood Al Ghatarifi, Aladdin Ayesh Full Text
Keywords: Gamification; Agile methodology; mobile application
Abstract: This paper proposes a framework for implementing a ubiquitous learning service based on software-defined television (Sw-de TV), following the approach of software-defined everything and cloud computing. The lack of u-learning frameworks and the limited convergence of infrastructure and flexibility in educational contexts are some of the challenges to overcome. Here, we present the general framework and an experimental test. The experimental results indicated satisfactory performance of video display on different screens, and very high relevance for application in an educational context. One conclusion of the test is that software-defined video processing platforms offer more scalability and flexibility than a conventional television (TV) infrastructure. Such platforms make it possible to adapt content to different screens, favoring the implementation of a ubiquitous learning service in which users can choose the moment, place, and device for performing a learning activity, with video as the main content.
Authors: Gustavo Moreno López, Jovani Alberto Jiménez Builes Full Text
Keywords: U-learning; cloud computing; framework; software-defined TV; multi-screen
Abstract: This paper presents an interactive virtual breadboard system that provides automated guidance to electrical engineering students working on electronic circuits labs. The primary contribution of the paper is the unique invariant representation of the state of the breadboard, which enables instructors to develop their own lab assignments with a set of customized hints. The paper describes the invariant state representation and its implementation in the virtual breadboard system. It also presents results of a pilot test demonstrating that using this system achieves the goal of reducing the workload of teaching assistants.
Authors: Lori L. Scarlatos, Ahmad R. Pratama, Tatiana T. Tchoubar Full Text
Keywords: Virtual breadboard; circuit design; guide-on-the-side; distance learning; digital hints; user interface
Abstract: Attendance monitoring systems (AMS) are important educational systems used to monitor student attendance and interest in a given course. The expected benefits of AMS are, among others, to improve student engagement, performance, and retention. The functionalities of most traditional AMS are, however, limited to recording and reporting attendance. Beyond these, they provide little or no other functionality capable of streamlining student engagement, performance, and retention. To fully realize their expected benefits and meet contemporary pedagogical needs, traditional AMS can benefit from extended innovative functionalities such as ‘Automatic Disengagement Notification’ and ‘Attendance Grader’. But the implementation of these functionalities depends on predefined systems requirements, which unfortunately are very scarce, if available at all, in the extant software engineering literature. The significant amount of work, resources, and cost required to develop systems requirements, especially for the optimization of expected benefits, can discourage software/education systems developers from developing such innovative functionalities. We contribute to addressing these limitations by identifying, modeling, and describing functional requirements for an integrated AMS. These requirements can be adopted and re-used by AMS developers, thus reducing the time, cost, and other resources expended in requirements development.
Authors: Joshua C. Nwokeji, Ayodele Olagunju, Anugu Apoorva, Steve Frezza, Mei-Huei Tang Full Text
Keywords: Learning management system; attendance management systems; feature tree model; student engagement
Abstract: This manuscript deals with the design and expansion of a highly successful cybersecurity program in a minority serving university through contemporary education and training methods in biosecurity for students who major in other disciplines, such as biological systems engineering and biotechnology. The key efforts will focus on curriculum development by means of a hands-on laboratory-based instruction as well as research-based technological development for biosecurity. This is accomplished by collaborative work involving two academic departments, namely, Computer & Information Sciences (CIS), and Biological Systems Engineering (BSE). The product is a cross-disciplinary concentration in biosecurity leading to a professional certification, which is ideal for undergraduate students and professionals with a biology background. Although this approach is modelled for a minority serving university, it can be replicated in other institutions of higher learning as well.
Authors: Hongmei Chi, Satyanarayan Dev, Aavudai Anandhi Full Text
Keywords: Biosecurity; biosafety; cybersecurity; information assurance; active learning
Abstract: This article describes the development of a prototype metamodel for Personal Knowledge Management (GCP), which is defined and implemented based on “lessons learned” registered on a mass-use social network. The functional architecture is applied in the implementation of a cloud-based system for registering personal lessons learned through a social network, Facebook. The process begins with the acquisition of data through a connection to a non-relational (NoSQL) database, on which a complementary algorithm has been set up for semantic analysis of the recorded information on lessons learned, in order to study the generation of Organizational Knowledge Management (GCO) from GCP. The end result is the actual implementation of a functional architecture that integrates a Web 2.0 application with a semantic-analysis algorithm over unstructured information using machine learning techniques, demonstrating a way to perform organizational knowledge management through personal knowledge management.
Authors: Lopez-Quintero JF, Montenegro-Marín C., Medina V.H., Y.V. Nieto Full Text
Keywords: Knowledge management; personnel knowledge management; lessons learned; semantic analysis; cloud computing; social networks; machine learning
Abstract: Keeping in mind that the existing problems of conventional quantum mechanics could arise from a wrong mathematical structure, I suggest an alternative basic structure. The critical part of it is modifying the commonly used terms “state”, “observable”, and “measurement”, and giving them clear, unambiguous definitions. This concrete definition, along with the use of a variable complex plane, is quite natural in geometric algebra terms and helps establish a feasible language for quantum computing. The suggested approach is then applied in a fiber-optics quantum information transferring/processing scenario.
Authors: Alexander SOIGUINE Full Text
Keywords: Quantum computing; geometric algebra; states; observables; measurements; light polarization
Abstract: New technologies and possibilities summarized under the keyword Digital Transformation induce massive changes in organizations’ processes and will lead to tremendous challenges for almost all businesses. Existing business models will be expanded or replaced by digitally driven services. Industry 4.0, with its autonomous cyber-physical production systems and its intelligent products, requires holistic approaches and will lead to sustainable changes in industrial manufacturing. This paper addresses the main fields of Industry 4.0 and Digital Transformation as well as their core concepts. Further, the paper describes the key success factors: interoperability and the organization’s ability to innovate. These key success factors are illustrated by three case studies in the fields of information supply in facility management, digital logistics, and intrapreneurial behavior as an important source of intra-organizational innovation.
Authors: habil. Christian-Andreas Schumann, Jens Baum, Eric Forkel, Frank Otto, Kevin Reuther Full Text
Keywords: Digitalization; Digital Transformation; industry 4.0; interoperability; semantic database; quality assurance; innovation
Abstract: The driverless car is a technology from the future that seems likely to become a reality sooner than expected. Countries around the world are assessing their readiness to adopt the change. For many, it is a technology that promises safer roads, a better environment, and a huge economic shift. For Saudi women, this technology might be the solution to a social dilemma that prevents women from driving themselves. This paper explores Saudi Arabia’s readiness for driverless cars by highlighting the unique situation of the women’s driving ban. More research needs to be conducted to reach conclusions regarding how Saudis will receive driverless cars.
Authors: Mariam Elhussein, Mohammed Alqahtani Full Text
Keywords: Driverless-cars; social computing; technology readiness; autonomous cars
Abstract: New manufacturing processes, materials and/or environmental conditions require new machines' parameters. In general, finding the right machines' parameters demands long work effort and high costs, yet it is essential for executing every manufacturing process accurately, and the know-how of companies is characterized by the development of these appropriate parameters. A third party can build a new business model on a managed exchange of this know-how between companies by providing an online marketplace for trading machines' parameters. As a result, the revenues of these companies increase, because new added value is generated from existing resources, namely the previously developed and used machines' parameters. At the same time, the high costs of finding the right parameters can be saved by purchasing them on the marketplace. These innovative ideas can be realized only by ensuring the security of the traded parameters. In this paper, a reliable, innovative business model for online trading of machines' parameters in the automation and manufacturing sector is presented, based on the concept of B2B e-business. It is essential to enforce a usage control policy on traded machines' parameters even after purchase; the operation of purchased parameters on a machine is therefore regulated through different licensing models. Licenses are generated according to the order's conditions for a specific machine once the purchasing process on the marketplace is complete. Before purchased parameters are executed on a machine, verification of the machine's ID and of the parameters' validity is required. The protection requirements over the whole trading process, as well as the security measures for transmitting machines' parameters to customers, are demonstrated in this paper.
Authors: Ghaidaa Shaabany, Reiner Anderl Full Text
Keywords: Manufacturing processes; machines’ parameters; new added-value; B2B business; usage control policy; licensing models; security; protection measurements; online marketplace; business model; electronic goods
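A minimal Python sketch of the license-gated execution step described in the abstract above, assuming a simple license record with a machine binding and a validity window; the field names and checks are illustrative, not the authors' implementation:

    from datetime import datetime, timezone

    def may_execute(license_rec: dict, machine_id: str) -> bool:
        # Parameters may run only on the machine the license was issued for...
        if license_rec["machine_id"] != machine_id:
            return False
        # ...and only inside the validity window set by the order's conditions.
        now = datetime.now(timezone.utc)
        return license_rec["valid_from"] <= now <= license_rec["valid_until"]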
Abstract: Crops that are currently underutilized can play a major role in diversifying food sources and combating climate variability. One major obstacle to wider adoption of these species is the lack of information on the geographic areas where they are currently grown; they are typically cultivated on marginal lands through subsistence agriculture. At present, there is no global database and no efficient procedure that allows users to acquire cropping patterns of underutilised crops. The proposed solution identifies underutilised cropping patterns using online search engine data. The target is to determine global public interest in underutilised crops over time through search engine data, thereby identifying possible crop utilisation patterns, trends and interest pertaining to underutilised crops over time. As a proof of concept, we collected a set of keywords synonymous with Bambara groundnut (BG) from local and international databases and research publications. Using the Google AdWords service and 40 different terms for BG, we were able to gather search event data for two years at the city level. Preliminary analysis with a software prototype shows that the data provides new insights into how BG search events are distributed, and that it can be used to delineate current areas where BG grows and to characterise their value chains. For evaluation purposes, we compared our BG results with the crop's known network of growers and researchers, and confirmed that the results not only matched known regions of the network but also proposed several new ones that need to be evaluated. The results suggest that the proposed solution provides significant indicators of possible cropping patterns and/or research interests around the world.
Authors: Ayman Salama Mohamed, Ebrahim Jahanshiri, Tomas Henrique Maul Full Text
Keywords: Underutilised crops; data mining; Google keywords; cropping patterns
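As an illustration of the aggregation step such a pipeline might use, the following Python sketch sums search-event counts per city over a list of crop synonyms; it assumes a hypothetical CSV export with city, term and count columns, not the authors' actual AdWords data format:

    import csv
    from collections import Counter

    def interest_by_city(csv_path, crop_terms):
        # Sum search-event counts per city over all synonyms of the crop.
        totals = Counter()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["term"].lower() in crop_terms:
                    totals[row["city"]] += int(row["count"])
        return totals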
Abstract: Technological advancement has given a more enlightened perspective on developing recent solutions. Efforts to imitate the human brain have achieved milestones that are providing promising results. However, the technicalities of, and reliance between, different computing entities remain a concern. This research provides a module-based framework that can build a brain-like experience imitation for itself using reinforcement learning agents. The framework design connects and validates user requirements and maps them to functionality, with reasonable data retrieval and performance measures observed. Data is a concern for modern technologies, and so are customer requirements. This research combines these two broad research areas and presents a design framework that allows the agent to communicate using knowledge from data together with instructions or queries in the form of user requests or business requirements. The experience generator then enhances subsequent runs, lessening cost and expense and improving the overall performance of the two widely separated modules through reinforcement learning and an experience or knowledge base.
Authors: Mazhar Hameed, Hiba Khalid, Usman Qamar Full Text
Keywords: Big data; Online Analytical Processing (OLAP); Online Transactional Processing (OLTP); contextual query; scientific data management
Abstract: The paper presents a novel technique for determining pressure loss coefficients in tee junctions by use of an artificial neural network (ANN). Geometry and flow parameters are fed into the ANN as inputs for training the network. The efficacy of the network is demonstrated by comparing ANN-derived and experimentally obtained pressure loss coefficients for combining flows in a tee junction. Reynolds numbers ranging from 200 to 14000 and discharge ratios varying from minimum to maximum flow have been used for the calculation of the pressure loss coefficients. The pressure loss coefficients calculated using the ANN are compared to models from the junction-flow literature. The results achieved with the ANN agree reasonably well with the experimental values.
Authors: Shahzad Yousaf, Imran Shafi, Jamil Ahmad Full Text
Keywords: Artificial neural network; pressure loss coefficients for solar collector; combining flow
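A minimal sketch of the kind of ANN regression the abstract describes, using scikit-learn's MLPRegressor with placeholder training data; the actual study trains on measured geometry and flow parameters, not on these made-up rows:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Each row: [Reynolds number, discharge ratio]; y: measured loss coefficient K.
    X = np.array([[200, 0.1], [5000, 0.5], [14000, 0.9]])   # placeholder inputs
    y = np.array([1.8, 0.9, 0.4])                           # placeholder targets
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0).fit(X, y)
    K_pred = model.predict([[7000, 0.6]])                   # coefficient estimate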
Abstract: Context and motivation: To elicit means to gather, acquire or extract, so requirements elicitation means to gather or discover requirements. The activity is performed to determine the system requirements from stakeholders, system documents, domain knowledge and other requirements sources. Question/problem: Requirements engineering is the most important part of a successful software system because here we decide 'what' is going to be built. Wrong decisions during this phase will have a negative impact on the final product. Idea: The objective is to develop a better understanding of the requirements. At the start of the requirements engineering process we have only a few requirements (along with system vision statements), but at the end of this activity most of the requirements, or in the ideal scenario all of them, need to be known at the appropriate level. The idea is to propose an integrative goal-quality model for requirements. The success of a software product is highly dependent on Non-functional Requirements (NFR). In this paper an integrative goal model of influencing factors is presented. This helps to guide the tailoring of a software quality model based on the various project requirements, organizational needs, individual goals of developers and constraints of the environment. Contribution: The influencing factors help to integrate the goal model with the quality model and therefore support a systematic elicitation of project-specific requirements.
Authors: Arfan Mansoor, Detlef Streitferdt, Elena Rozova, Qaiser Abbas Full Text
Keywords: Requirements; goal models; quality models; meta model
Abstract: Companies that hire software developers face challenges in securing competent employees. As hiring a developer can cost up to hundreds of thousands of dollars, depending on the expertise of the desired candidate, companies must rely on a multi-method approach to secure competent developers. This paper evaluates the "Herbert" test, an algorithmic problem-solving program that is currently being used to help evaluate potential software developers at Fast Track, a web development company. There appears to be a positive correlation between performance on the test and subsequent performance on the job. This longitudinal study reaffirms the need for a multi-method approach to hiring, with a special emphasis on problem-solving. Furthermore, as technical systems continue to emerge, increasing the demand for technical jobs, testing tools like "Herbert" are becoming more relevant.
Authors: Soraya Cardenas Full Text
Keywords: Algorithms; evaluations; problem-solving; hiring software developers
Abstract: Real-time applications are among the fastest growing applications in the market due to their popularity, business value and the fact that the web is their native environment. As a result, enhancing the performance of these applications has always been a concern for the IT industry. In this research, we took a closer look at the effect of design patterns on the performance of these applications, using simulation as the research method. Two of the design patterns examined, namely the Observer and the State patterns, proved to be the more effective in terms of software efficiency.
Authors: Muhammad Ehsan Rana, Wan Nurhayati Wan Ab. Rahman Full Text
Keywords: Design patterns; real time software; real time applications; software performance; software efficiency
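For readers unfamiliar with the first of the two patterns the study compares, a minimal Observer sketch in Python (an illustrative toy, not the researchers' simulation code):

    class Subject:
        # Publishes events to registered observers without coupling to them.
        def __init__(self):
            self._observers = []
        def attach(self, observer):
            self._observers.append(observer)
        def notify(self, event):
            for observer in self._observers:
                observer.update(event)

    class Logger:
        def update(self, event):
            print("observed:", event)

    feed = Subject()
    feed.attach(Logger())
    feed.notify("new frame")   # every attached observer reacts to the event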
Abstract: Collocation is almost always preferred over distribution: it makes sense that collocated team members are more likely to perform better than distributed ones. In today's real world, however, the distributed mode either is the norm or is quickly becoming the norm. That is not to say that collocation no longer exists, but it is becoming less and less pronounced. Pair programming is a technique that can be performed in either a collocated or a distributed fashion. Not all software development projects use this practice, and the projects that do typically perform either collocated or distributed pair programming, but very rarely both. This paper examines a project where both types of pair programming were used. At the completion of the project, all developers were asked to complete a survey. The results of the survey allowed us to compare various attributes of collocated and distributed pair programming. What may be surprising is that in some cases the differences between the two are minimal.
Authors: Mark Rajpal Full Text
Keywords: Agile; scrum; pair programming; extreme programming
Abstract: As the mobile world continues to expand into an Internet of Things and Networks of Everything, we find that our lives, while becoming more convenient, also come with ties. These ties are based on many interconnections between people and software, and it is critical to ensure that we can trust these ties in the software that we use. Modeling, evaluating, and improving the end user's trust in mobile apps requires systematic frameworks and strategies. To this end, we propose an adaptable and flexible quality model and framework, borrowing from generally accepted ISO 25010 modeling concepts while enhancing our previous work on quality-in-use modeling, by representing specific system quality characteristics that may influence trust from the quality-in-use standpoint. The resulting trust modeling framework can be used for the evaluation and improvement of trust targeted at different mobile apps.
Authors: Philip Lew, Luis Olsina Full Text
Keywords: Trust; quality model; system/system-in-use quality view; mobile user experience
Abstract: Metamodeling of transmission pipeline systems is, simply, the reengineering of pre-constructed notations and abstractions of the pipeline engineering domain into a form that offers the domain expert the expressive power to create designs suited to the intended transmission pipeline project. The formality that can provide such expressive power is a domain specific language (DSL); the domain specific modeling approach is therefore adopted to create a platform whose specification primitives represent abstractions and conceptual modeling processes in the design and implementation of transmission pipeline configurations. Domain specific languages, which are centered on metamodeling, raise the level of abstraction beyond programming by specifying the solution directly with domain concepts. The conceptual DSL definition brings to bear domain abstractions, and expressive power restricted to, the domain of transmission pipelines for the related products in the petroleum industry and in water supply. This can be achieved only by exploiting the specific properties of the pipeline engineering application domain that pertain to transmission. The description of these specific properties therefore represents the domain concepts that are useful in creating the abstractions and in the semantic mappings of the elements of the DSL modeling platform.
Authors: Bunakiye R. Japheth, Acheme I. David Full Text
Keywords: Formal specifications; semantic mappings; petroleum industry; pipeline design; modeling platform
Abstract: Cloud computing abstracts the underlying hardware details from the user. As long as the customer Service Level Agreements (SLA) are satisfied, cloud providers and operators are free to make infrastructural decisions to optimize business objectives, such as operational efficiency of cloud data centers. By adopting a holistic view of the data center and treating it as a single system, a cloud provider can migrate application components and virtual machines within the system according to policies such as load balancing and power consumption. We contribute to this vision by removing architectural barriers for workload migration and reducing the downtime of migrating processes. We combine the post-copy approach to workload migration with a novel specialized low-latency interconnect for handling the resulting remote page faults. In this work, we introduce a cross-architecture workload migration system, specify the requirements towards the specialized interconnect, discuss design trade-offs and issues, and present our proposed SW-HW co-design.
Authors: Joel Nider, Mike Rapoport, Yiftach Binyamini Full Text
Keywords: cloud; post-copy; migration; page fault; low latency; interconnect
Abstract: This paper presents a review of trends and challenges in collaborative Software Engineering. Due to the nature and size of large-scale Software Engineering projects, effective collaboration is important and necessary; hence it is not uncommon to see the adoption of a remix of practices, models, methodologies, tools and skills. However, this remix, alongside the adoption of emerging paradigms such as Cloud computing, gives rise to factors that undermine collaborative Software Engineering projects. This paper aims to provide a systematic review and analysis of existing trends, models and challenges, with a view to fostering a better understanding of the factors undermining the collaborative Software Engineering process, as well as to help identify motivations, gaps and issues pertinent to this research area for a more effective process in the Cloud. A systematic approach was employed in this research; it is instrumental in identifying relevant primary studies, and its design provides a means of continuity for any future extension of this review.
Authors: Stanley Ewenike, Elhadj Benkhelifa, Claude Chibelushi Full Text
Keywords: Collaborative software engineering; software development process; models; trends; cloud computing; collaboration; systematic review
Abstract: The unstoppable adoption of cloud and fog computing is paving the way to developing innovative services, some requiring features not yet covered by either fog or cloud computing. Simultaneously, today's technology evolution is easing the monitoring of any kind of infrastructure, be it large or small, private or public, static or dynamic. The fog-to-cloud computing (F2C) paradigm recently came up to support foreseen and unforeseen service demands while simultaneously benefiting from the smart capacities of edge devices. Inherited from cloud and fog computing, a challenging aspect of F2C is security provisioning. Unfortunately, the security strategies employed in cloud computing require computational power not available in devices at the edge of the network, whereas security strategies in fog are still in their infancy. Accordingly, in this paper we propose a Software Defined Network (SDN)-based security management architecture built on a master/slave strategy. The proposed architecture is conceptually applied to a critical infrastructure (CI) scenario, analyzing the benefits F2C may bring for security provisioning in CIs.
Authors: Sarang Kahvazadeh, Vitor B. Souza, Xavi Masip-Bruin, Eva Marín-Tordera, Jordi Garcia, Rodrigo Diaz Full Text
Keywords: IoT; cloud computing; fog computing; fog-to-cloud computing; security; Software Defined Network (SDN); critical infrastructures
Abstract: As the demand for biometric technology grows and its implementation appears poised for broader use, increased concerns with regard to privacy have been raised. Biometric recognition is promoted in a variety of private and government domains: helping to identify individuals or criminals, providing access control systems that enable efficient access to services, and helping keep patient data safe, among other functions. However, new advances in biometrics have brought forth widespread debate among researchers, with concerns surrounding the effectiveness and management of biometric systems. Further questions arise about their appropriateness and the societal impacts of their use. This review begins by providing an overview of past and present uses of biometric technology and the serious problems it poses to privacy. It then examines the factors that play a part in the implementation of privacy in biometrics. The cultural differences that affect legislative approaches are explored by comparing the approaches adopted by the European Union and the United States. Furthermore, possible methods of remediating the concerns raised by the implementation of biometrics are discussed. It is concluded that governments and organisations must be transparent and cooperate with legislators; this combined effort may eliminate many of the perceived risks in the technology and help elucidate clearer methods for governing biometrics, to ensure that future developments hold privacy in high regard.
Authors: Zibusiso Dewa Full Text
Keywords: Biometric technology; privacy; legislation; evolving practices; invasiveness; conflicting interests; European Union
Abstract: The current authentication systems in smartphones are classified as static or one-shot authentication schemes, in which the user is validated at a single point. These systems cannot tell the difference between an intruder and a legitimate user if security credentials such as passwords have been leaked. This issue is addressed by continuous authentication schemes, in which the system constantly monitors the user through different procedures to detect whether the user is genuine or an intruder. Continuous authentication schemes can be deployed using different methods, such as behavioral, gestural and facial approaches. In this paper, we critically analyze the different continuous authentication schemes, evaluating the robustness and failure-free operation of each approach. We aim to provide precise knowledge of the different continuous authentication schemes to help the user determine the appropriateness of the underlying model adopted by each approach.
Authors: Sajjad, Munam Ali Shah, Adnan Zeb, Sana Akram, Hussain Ahmad, Muhammad Sikander Zamir Full Text
Keywords: Continuous authentication; security; mobile sharing; TIPS; SenGuard; SilentSense; GeoTouch; gestures; key strokes
Abstract: Secure signal processing is becoming a de facto model for preserving privacy. We propose a model based on the Fully Homomorphic Encryption (FHE) technique to mitigate security breaches. Our framework provides a method to perform a Fast Fourier Transform (FFT) on a user-specified signal. Using encryption of individual binary values and FHE operations over addition and multiplication, we enable a user to perform the FFT in a fixed-point fractional representation in binary. Our approach bounds the error of the implementation, enabling user-selectable parameters based on the specific application. We verified our framework against test cases for one-dimensional signals and images (two-dimensional signals).
Authors: Thomas Shortell, Ali Shokoufandeh Full Text
Keywords: Image processing; computer security; Fast Fourier Transforms
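A heavily hedged sketch of the core idea above: a radix-2 FFT butterfly evaluated with only homomorphic addition and multiplication on fixed-point encodings. The he_add, he_sub and he_mul callables, the SCALE constant and the encoding are stand-ins for a real FHE scheme, not the authors' implementation:

    SCALE = 2 ** 16                      # 16 fractional bits for fixed-point values

    def to_fixed(x: float) -> int:
        return round(x * SCALE)          # encode before encryption

    def butterfly(he_add, he_sub, he_mul, a, b, w):
        # One radix-2 butterfly on ciphertexts: (a + w*b, a - w*b),
        # computed without ever decrypting the operands.
        t = he_mul(w, b)                 # twiddle factor times encrypted input
        return he_add(a, t), he_sub(a, t)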
Abstract: This paper presents the implementation of a practical voice recognition system using MATLAB (R2014b) to secure a given user's system so that only that user may access it. Voice recognition systems have two phases, training and testing. During the training phase, the characteristic features of the speaker are extracted from the speech signal and stored in a database. In the testing phase, the stored audio features are compared with those of the test voice sample to determine whether a match exists. For this research, Mel Frequency Cepstral Coefficients (MFCCs) were chosen to represent the feature vectors of the user's voice, as they accurately simulate the behavior of the human ear; this characteristic makes them an excellent measure of speaker characteristics. Feature matching is then performed by subjecting the MFCCs to vector quantization using the LBG (Linde-Buzo-Gray) algorithm. In practical scenarios, noise is a major factor that adversely influences a voice recognition system. The paper addresses this issue by utilizing spectral subtraction to remove environmental noise from the speech signal, thereby increasing the robustness of the system.
Authors: Ashwin Nair Anil Kumar, Senthil Arumugam Muthukumaraswamy Full Text
Keywords: Speaker identification; voice recognition; mel frequency cepstral coefficients (MFCCs); vector quantization (VQ); spectral subtraction
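A compact numpy sketch of the LBG codebook training named in the abstract, splitting the codebook and refining with Lloyd iterations over MFCC frames; the parameters are illustrative and the feature extraction follows the paper's MATLAB pipeline, not this sketch:

    import numpy as np

    def lbg(features, size, eps=0.01, iters=20):
        # features: (n_frames, n_mfcc) matrix; returns a (size, n_mfcc) codebook.
        codebook = features.mean(axis=0, keepdims=True)
        while len(codebook) < size:
            # Split every codevector slightly up and down...
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
            for _ in range(iters):                    # ...then Lloyd refinement
                d = ((features[:, None, :] - codebook[None]) ** 2).sum(-1)
                nearest = d.argmin(axis=1)
                for k in range(len(codebook)):
                    members = features[nearest == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
        return codebook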
Abstract: Widely deployed web services facilitate and enrich several applications, such as e-commerce, social networks, and online banking. This study proposes an optical challenge-response user authentication system model based on the One Time Password (OTP) principle (Scan2Pass) that uses multi-factor authentication and leverages the legitimate user's camera-equipped mobile phone as a secure hardware token. The methodology designed and implemented to evaluate the proposed idea is explored and explained throughout this paper, with a brief overview of the steps required to design an efficient and practical system. The requirements are discussed, together with our assumptions, to give a simple yet adequate understanding of the security of the proposed system in general. An overview of the basic architecture then explains the role of the shared secret and of the challenge-response protocol in completing the authentication procedure and providing mutual authentication between the user and the server, adopting multiple factors such as time and the OTP algorithm, and describing the operation flows of users during each phase of the system.
Authors: Hareth Zmezm, Hamzah F. Zmezm, Halizah Basiron, Mustafa S.Khalefa, Hamid Ali Abed Alasadi Full Text
Keywords: Electronic commerce; authentication; one time password; performance and reliability
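One way a challenge-response OTP of this general shape can be realized is a keyed hash over the server challenge, shown below as a minimal Python sketch; the truncation and parameters are illustrative assumptions, not the Scan2Pass specification:

    import hmac, hashlib, os

    def otp_response(shared_secret: bytes, challenge: bytes, digits: int = 6) -> str:
        # Phone side: MAC the challenge with the shared secret, truncate to digits.
        mac = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
        return str(int.from_bytes(mac[:4], "big") % 10 ** digits).zfill(digits)

    secret = os.urandom(32)
    challenge = os.urandom(16)   # e.g., delivered optically and scanned by the phone
    # Server side: recompute with the same secret and challenge, then compare.
    assert otp_response(secret, challenge) == otp_response(secret, challenge)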
Abstract: This paper showcases the development of an innovative healthcare solution that allows patients to be monitored remotely. The system utilizes a piezoelectric sheet sensor and the XBee wireless communication protocol to collect a heartbeat pressure signal from a human subject's neck and transmit it to a receiving node. Then, using signal processing techniques, a set of important vital parameters such as heart rate and blood pressure is extracted from the received signal. These extracted parameters are needed to assess the subject's health continuously and in a timely manner. The architecture of our system, which enables wireless transmission of the raw acquired physiological signal, has three advantages over existing systems. First, it increases the user's mobility, because the XBee wireless communication protocol is employed for signal transmission. Second, it increases the system's usability, since the user has to carry only a single unit for signal acquisition while preprocessing is performed remotely. Third, it gives more flexibility in acquiring various vital parameters with great accuracy, since processing is done remotely on powerful computers.
Authors: Mohammed Jalil, Mohamed Al Hamadi, Abdulla Saleh, Omar Al Zaabi, Soha Ahmed, Walid Shakhatreh, Mahmoud Al Ahmad Full Text
Keywords: Piezoelectric; XBee; medical sensors; vital signs; remote health monitoring
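A sketch of the heart-rate extraction step such a receiving node might perform, using SciPy peak detection on the pressure waveform; the thresholds are illustrative guesses, not the authors' signal processing chain:

    import numpy as np
    from scipy.signal import find_peaks

    def heart_rate_bpm(signal, fs):
        # Peaks at least 0.4 s apart (caps rate near 150 bpm), above the mean level.
        peaks, _ = find_peaks(signal, distance=int(0.4 * fs), height=signal.mean())
        intervals = np.diff(peaks) / fs       # seconds between successive beats
        return 60.0 / intervals.mean()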
Abstract: Search engines have vast technical capabilities to retain Internet search logs for each user and thus present major privacy vulnerabilities to both individuals and organizations by revealing user intent. Additionally, many of the web search privacy enhancing tools available today require that the user trust a third party, which makes confidentiality of user intent even more challenging: the user is left at the mercy of the third party, without control over his or her own privacy. In this article, we suggest a user-centric heuristic, Distortion Search, a web search query privacy methodology that works by forming obfuscated search queries via the permutation of query keyword categories, and by strategically applying k-anonymised web navigational clicks on URLs and ads to generate a distorted user profile, thus providing confidentiality of specific user intent and queries. We provide empirical results via the evaluation of distorted web search queries in terms of the retrieved search results and the resulting web ads from search engines. Preliminary experimental results indicate that web search query and specific user intent privacy might be achievable from the user side without the involvement of the search engine or other third parties.
Authors: Kato Mivule, Kenneth Hopkinson Full Text
Keywords: Web search privacy; query obfuscation; user profile privacy; user intent obfuscation
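To make the obfuscation idea concrete, a toy Python sketch that builds decoy queries by permuting keyword categories and sampling one term per category; the categories and terms are invented for illustration and say nothing about the paper's actual keyword sets:

    import itertools, random

    categories = {
        "health": ["flu symptoms", "blood pressure"],
        "travel": ["flights to rome", "hotel deals"],
        "tech":   ["gpu benchmarks", "ssd prices"],
    }

    def decoy_queries(n):
        # Each permutation of category order yields one sampled decoy query.
        decoys = []
        for order in itertools.permutations(categories):
            decoys.append(" ".join(random.choice(categories[c]) for c in order))
            if len(decoys) == n:
                break
        return decoys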
Abstract: A new method for detecting ransomware present in an infected host during its payload execution is proposed and evaluated. Data streams from on-board sensors present in modern computing systems are monitored, and appropriate criteria are used that enable the sensor data to effectively detect the presence of ransomware infections. Encryption detection depends upon small yet distinguishable changes in the physical state of a system as reported through on-board sensor readings. A feature vector is formulated consisting of various sensor outputs, coupled with a detection criterion for the binary states of "ransomware present" versus "normal operation". Preliminary experimental results indicate that ransomware is detected with an overall accuracy in excess of 95% and a corresponding false positive rate of less than 6%, for four different types of encryption methods over two candidate systems with different operating systems. An advantage of this approach is that previously unknown or "zero-day" versions of ransomware are vulnerable to the detection method, since no prior knowledge of the malware, such as a data signature, is required for the method to be deployed and used.
Authors: Michael A. Taylor, Kaitlin N. Smith, Mitchell A. Thornton Full Text
Keywords: Ransomware detection; physical sensor side channel; feature vector; encryption
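A minimal sketch of a sensor-based feature vector and deviation score of the sort the abstract describes; the sensor names, baseline scaling and threshold are illustrative assumptions, not the paper's detection criteria:

    import numpy as np

    THRESHOLD = 3.0   # illustrative; the paper tunes its criteria empirically

    def ransomware_score(sensors, baseline, scale):
        # Normalized deviation of live readings from the "normal operation" baseline.
        keys = ("cpu_temp", "fan_rpm", "power_draw")
        z = [(sensors[k] - baseline[k]) / scale[k] for k in keys]
        return float(np.linalg.norm(z))

    def ransomware_present(sensors, baseline, scale):
        return ransomware_score(sensors, baseline, scale) > THRESHOLD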
Abstract: To meet the growing need for robust and secure identity verification systems, a new biometric based on neural representations of synergistic hand grasps is proposed here. In this preliminary study, five subjects were asked to perform six synergistic hand grasps that occur most often in common activities of daily living, while their scalp electroencephalographic (EEG) signals were recorded and analyzed using 20 scalp electrodes. In our previous work, we found that the hand kinematics of these synergistic grasps showed potential as a biometric. In the current work, we asked whether the neural representations of these synergistic grasps can provide a signature unique enough to serve as a biometric. The results show that across 300 entries, the system, in its best configuration, achieved an accuracy of 92.2% and an EER of ~4.7% when tasked with identifying the five individuals. The implications of these preliminary results and near-future applications are discussed. We believe that this study could lead to the development of a novel biometric as a potential future technology.
Authors: Vrajeshri Patel, Martin Burns, Ionut Florescu, Rajaratnam Chandramouli, Ramana Vinjamuri Full Text
Keywords: Biometrics; hand synergies; quadratic discriminant classifier; electroencephalography (EEG); feature extraction
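Since the keywords name a quadratic discriminant classifier, a minimal scikit-learn sketch of that classification step follows; the random feature matrix merely stands in for the paper's EEG-derived features:

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))     # placeholder for 20-electrode EEG features
    y = rng.integers(0, 5, size=300)   # five subject-identity labels
    clf = QuadraticDiscriminantAnalysis().fit(X[:240], y[:240])
    accuracy = clf.score(X[240:], y[240:])   # held-out identification accuracy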
Abstract: During the past decade, interactions among people have gradually changed as a result of the popularity, availability and accessibility of social networking sites (SNSs). SNSs enhance our lives in terms of relaxation, knowledge, and communication. On the other hand, the information security and privacy of SNS users are threatened, with most users not aware of this fact. The rate of cyber-attacks committed via SNSs is dramatically high, and finding a solution that provides better security for social network users has become a major challenge. This review was conducted with the objective of collecting and investigating all credible and effective research that has studied security problems and solutions on SNSs. We extract and discuss the prominent security features and techniques in the selected research articles to provide researchers and practitioners with a concise collection of security solutions. In this review, we conduct a secondary study of previous work devoted to the security threats of SNSs and to new security techniques for protecting against attacks. We apply the standard guidelines of the systematic literature review, working thoroughly through 84 previous studies, including journal papers and conference proceedings published in high impact venues. The results show that 2013 was the peak period in which security problems on SNSs received attention from researchers, and that 23 significant security problems in SNSs have been identified. Facebook and Twitter are the two SNSs most often referred to by researchers regarding security problems. We found that people (users) and the SNSs themselves are the two main causes of today's security and privacy issues on SNSs. In conclusion, the security and privacy issues on SNSs remain an unsolved problem, and there is as yet no solid and complete solution for removing them entirely.
Authors: Azah Anir Norman, Maw Maw, Suraya Hamid, Suraya Ika Tamrin Full Text
Keywords: Social networking sites (SNS); security; privacy; security techniques; systematic literature review (SLR)
Abstract: Extensive research has been conducted on the technical side of privacy, managing it with mechanisms such as encryption and passwords. However, the core issues of privacy are not addressed. This is particularly evident when photos and videos are shared via social media. The main problem is that the actual meaning of privacy is difficult to define: though there are definitions of privacy and acts designed to protect it, there is no clear consensus as to what 'privacy' actually means, and it is often challenging to manage as an ill-defined concept. This research is motivated by the question of what privacy means in relation to photos and videos; methods have been used to obtain a crowd truth and arrive at a general consensus. The outcome of this research is a conceptual framework of privacy, particularly for sharing photo and video content via social media.
Authors: Srinivas Madhisetty, Mary-Anne Williams Full Text
Keywords: Privacy; photos and videos; conceptual framework
Abstract: Two-factor authentication requires two pieces of independent evidence, mostly one based on possession and the second based on knowledge. The major drawbacks of these methods are usability and the costs of procuring and (re)placing the hardware token. The SwissPass is a contactless crypto card used mainly by the Swiss federal railways to inspect travel tickets (GA and Half-Fare travel cards). This paper presents an authentication protocol using the widely distributed SwissPass that allows users to log in to web and mobile applications in a secure and intuitive way via smartphone. The protocols are further developed into the SwissPass Authenticator, which provides federated authentication on the smartphone.
Authors: Annett Laube, Reto Koenig Full Text
Keywords: Authentication; mobile applications; smart cards; privacy
Abstract: Supervisory Control and Data Acquisition (SCADA) systems form a critical component of industries such as national power grids, manufacturing automation, and nuclear power production. By interacting with control machines and providing real-time support for monitoring, gathering, and recording data, SCADA systems have a major impact in industrial environments. Along with the countless benefits of SCADA systems, however, inconceivable risks have arisen; moreover, SCADA operators, production staff and sometimes even systems experts have little or no knowledge of applying security due diligence. In this paper, we systematically review SCADA security along different aspects (i.e. SCADA components, vulnerability, severity, impact, etc.). Our goal is to provide an all-inclusive reference for future SCADA users and researchers. We also use a time-based heuristic approach to evaluate vulnerabilities and show the importance of such evaluation. We aim to establish a fundamental level of security due diligence to ensure SCADA risks are well comprehended and managed.
Authors: Parves Kamal, Abdullah Abuhussein, Sajjan Shiva Full Text
Keywords: Supervisory Control and Data Acquisition (SCADA) security; critical infrastructure security; SCADA; risk assessment; vulnerability scoring
Abstract: True nano RFID/Computers (NRs) will represent a major change in the way many things are done. This technology will truly enable fulfillment of the promise of the Internet of Things (IoT). NRs could have as transformative an impact on the world as the Internet and the personal computer have had. They could provide the foundation for realizing the vision of the "Internet of Things" or "Internet of Everything" by "wiring up" the world, from inanimate objects to living organisms. From the environment of the individual to the global ecosystem, we could "know" and interact with our environment on a real-time basis down to the nanoscale, such as human cells and the molecular structures of objects. This would enable humans to have far greater understanding, awareness and control of the world around them, viewing our world at much "higher resolution". The benefits could range from huge gains in improving human health while reducing costs, to far greater efficiencies in the use of natural resources to enhance prosperity and environmental sustainability. These nanoscale devices can do much more than tracking: they can be embedded in any material and thus serve as a platform both to acquire data from the material and to send information or instructions to it. Beyond two-way data transmission, NRs would not just be tracking devices but complete computing and data acquisition systems. Through the application of ultra-miniaturization, distributive computing and nano antennas, the acquisition and processing of analytical data can become massively parallel.
Authors: Mario Cardullo, Robert Meagley Full Text
Keywords: Nanoscale; Internet of Things (IoT); Internet of Everything; RFID; nano-computers; nano RFID/Computer (NR)
Abstract: Binary Coded Decimal (BCD), being the more accurate and human-readable representation with ease of conversion, prevails in computing and electronic communication. In this paper, a tree-structured parallel BCD addition algorithm is proposed with the reduced time complexity O(N(log2 b) + (N - 1)), where N = number of digits and b = number of bits in a digit. A BCD adder is more effective with a LUT (Look-Up Table)-based design, owing to the numerous benefits and applications of FPGA (Field Programmable Gate Array) technology. A size-minimal and depth-minimal LUT-based BCD adder circuit construction is the main contribution of this paper. The proposed parallel BCD adder achieves radical gains over the best-known existing LUT-based BCD adders, providing prominently better performance with a 20.0% reduction in area and a 41.32% reduction in delay in post-layout simulation. Since the proposed circuit improves both area and delay, it is 53.06% more efficient in terms of area-delay product than the best known existing BCD adder, which is surely a significant achievement.
Authors: Zarrin Tasnim Sworna, Mubin Ul Haque, Hafiz Md. Hasan Babu, Lafifa Jamal Full Text
Keywords: Adder; Binary Coded Decimal (BCD); Field Programmable Gate Array (FPGA); Look-Up Table (LUT); correction
Abstract: Adders are basic building blocks in digital systems; addition is an indispensable operation in digital, analog and control systems. Performance optimization of a digital adder relies on parameters such as power, speed and area, and much research has gone into optimizing the delay and power dissipation of adders. The carry-lookahead adder (CLA) is considered one of the fastest digital adders; it emerges from the concept of computing all incoming carries in parallel. This paper introduces various design implementations using CMOS transistors that produce the unique logic of the CLA. Each design implementation is analyzed by assessing the power dissipation and the delay at every possible state through transistor sizing. Simulations have been performed on Tanner EDA tools based on 250nm technology at a 2.5V supply voltage. Previous work on CLAs has been examined, and the 8-bit CLA design and its delay and power dissipation have been evaluated.
Authors: Naga Spandana Muppaneni, Steve C.Chiu Full Text
Keywords: Carry-lookahead adder (CLA); performance optimization; simulations; transistor sizing; power dissipation; Tanner EDA tools
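For reference, the carry-lookahead logic the abstract builds on can be stated in a few lines; this Python sketch evaluates the generate/propagate recurrence that the hardware unrolls in parallel (a bit-level model, not the authors' CMOS designs):

    def cla_add(a, b, n=8):
        # generate g_i = a_i AND b_i; propagate p_i = a_i XOR b_i;
        # carries follow c_{i+1} = g_i OR (p_i AND c_i), computable in parallel.
        g = [((a >> i) & (b >> i)) & 1 for i in range(n)]
        p = [((a >> i) ^ (b >> i)) & 1 for i in range(n)]
        c = [0] * (n + 1)
        for i in range(n):
            c[i + 1] = g[i] | (p[i] & c[i])
        s = [p[i] ^ c[i] for i in range(n)]
        return sum(bit << i for i, bit in enumerate(s)) + (c[n] << n)

    assert cla_add(173, 94) == 173 + 94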
Abstract: Force Myography (FMG) is a method of tracking movement and functional activity based on the volumetric changes that occur in a limb during muscle contraction. FMG has several advantages over other myographic modalities that support its implementation in rehabilitative and assistive technology for tracking upper extremity movement during activities of daily living. The aim of the current work is to explore the stability of FMG sensors during non-static upper extremity activities. Twenty-one participants of varying age and gender were recruited to perform a set of tasks while wearing a custom FMG band. The participants were required to move between two extremes of range of motion (wrist flexion/extension and forearm pronation/supination) or between two extremes of grasp force (squeeze and relax). FMG presented low variability (<6%) and demonstrated little to no drift with ongoing task duration (Spearman's |R| < 0.3). FMG variability showed no relationship to differences in anthropometry or grip strength (Spearman's |R| < 0.3), suggesting that FMG wearers will present a stable FMG signal despite differing musculoskeletal characteristics. Finally, variability in FMG presented no significant relationship between user variables and the testing accuracies of machine learning models trained on FMG data. The results of this study demonstrate the stability of FMG signals during non-stationary tasks and support the potential of implementing FMG in user-machine interface technology.
Authors: Mona Lisa Delva, Carlo Menon Full Text
Keywords: Activities of daily living; age-related rehabilitation; assistive technology; biomedical devices; human factors; independent living; prosthetic control; sensors/sensor application; force myography
Abstract: A conventional design has an insulator layer for every crossbar layer stacked. A methodology for alternately selecting memristor layers, together with a methodology for proper operation, is proposed. This would help increase the vertical density of stacked memristor crossbar arrays; it is the maximum possible memory density design for crossbar stacks. While still suffering a few shortcomings, concurrent access in particular, the new design proves itself an interesting alternative because of the increase in memory density. For a 2 nm insulator thickness in the conventional design, an increase of at least 27.5% in vertical crossbar density is expected in the proposed design. Alternative designs and approaches have also been proposed to address the shortcomings.
Authors: Selvakumaran Vadivelmurugan Full Text
Keywords: Random access storage; very-large-scale integration (VLSI); analog circuits
Abstract: Today's smartphones have developed into advanced computer systems with enormous photo and video capabilities, larger touch-screens, ubiquitous internet access, and powerful global positioning services. Numerous applications nowadays provide a wide range of services to smartphone users in almost every area of daily life, such as communication, news, entertainment, maps, and education. The Student Information System, or SIS, is a desktop-level service provided by institutions of higher education through which students can access their transcripts, get their semester timetable, register for or drop courses, find out about the courses' final examination timetable, or get general academic information. Using the SIS, faculty can access information pertaining to the students registered in their courses, get detailed information about their advisees, or audit degrees when their advisees are close to graduation. In this paper, we describe the design of an SIS application developed for the Android operating system so that, instead of using a computer, both students and faculty can access academic information on the go through smartphones on which the application is installed. Initial tests have indicated a positive experience for both students and faculty using this application, and efforts are underway to develop a similar application for Apple's iOS as well.
Authors: Tariq Jamil, Iftaquaruddin Mohammed Full Text
Keywords: Smartphones; mobile; android; SIS
Abstract: An ample amount of time is spent locating, finding and procuring the desired items on the wish lists of our daily-life necessities. Usually the list of desirable items is prepared when the person is not near the desired items. It would be convenient to have an assistance application that prompts people on the go to procure a desired item. This requires a two-tier information sharing system, one at the procurer and the other at the vendor: the procurer adds items to the wish list, and the vendor adds offers that must comply with the transaction, valid and existence times with respect to a determined geo-location. Therefore, a system with a mobile application named 'Zambeel' is presented that enables procurers to locate vendors for the things they require. The idea is to produce a geographically aware application for mobile devices that automatically notifies users, showing the location on a map even while they are on the move.
Authors: Kashif Rizwan, Nadeem Mahmood, S A K Bari, Zain Abbas, Adnan Nadeem, Ahmad Waqas Full Text
Keywords: Spatio-temporal; geo-fence; smart personal assistant; push notification; IoT; cloud
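A small Python sketch of the geo-fence test that would drive such notifications, using the haversine great-circle distance; an illustrative check, since the abstract does not describe the actual Zambeel logic or radii at this level:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, Earth radius 6371 km.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def should_notify(user_pos, vendor_pos, radius_km=1.0):
        # Push a notification when the user enters the vendor's geo-fence.
        return haversine_km(*user_pos, *vendor_pos) <= radius_km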
Abstract: Long Term Evolution (LTE) is the current leading data transfer technology in the cellular industry and is used daily by people nationwide on multiple carriers to access their online services and networked applications. However, the service quality and access to this technology differ among cellular service providers due to the distance from the cellular towers and the number of users on the network at any given point in time. While it may be easy for a consumer to find the service provider with the best coverage nationwide, it is not as easy to find which has the strongest cellular signal in their favorite and frequented areas. To provide the residents of a city or town with an accurate representation of cellular service coverage for different carriers in their area, the authors set out to test a number of cost-effective, open-source-based Software Defined Radio (SDR) drive test systems, select one, and calibrate it against an industry-standard LTE coverage survey instrument for a number of LTE frequency bands. This paper describes a cost-effective SDR drive test system (referred to in this paper as LTESub) that the authors developed and adopted in conducting Radio Frequency (RF) drive tests around a specific geographical area in their school town. To provide both an accurate representation and a consumer-friendly way to test LTE signal strength, the authors selected the RTL-2832U NESDR Nano USB and DeLorme Earthmate LT-20 GPS USB dongles and the RTL-SDR Scanner software, and calibrated the newly created LTESub drive test system against the Anritsu LinkMaster ML87110A drive test multi-band receiver. The use of the LTESub system led to the successful development of a set of highly accurate heat maps for the 866.3 MHz and 751 MHz bands that potential cellular subscribers can use to select the right frequency bands for their area and to map them to a cellular service provider by consulting publicly available online tools.
Authors: Emil H. Salib, Andrew Funkhouser Full Text
Keywords: Software Defined Radio (SDR); Long Term Evolution (LTE); Radio Frequency (RF); drive test; heat map; LTESub
Abstract: Many efforts have been made to produce propagation models suitable for a given area of interest. This raises the issue of the environment and its characteristics, a perspective that has led to classifying environments with specific models, which in turn are embedded in planning and optimization software. Such software is very expensive, and calibration of the embedded prediction models may impose additional costs on the network operator(s). This vision prompted the work developed in this paper, which presents a calibration of the European Cooperation in the field of Scientific and Technical Research (COST) COST-231 model and of the Stanford University Interim (SUI) model to field data from selected study environments in Togo. The work not only proposes optimized fitting parameters suitable for the environmental conditions in Togo, but also provides consistent statistical parameters for these models for future electromagnetic applications, with the cellular operating frequency considered throughout.
Authors: Adekunlé A. Salami, Ayité S. A. Ajavon, Koffi A. Dotche, Koffi Sa-Bedja Full Text
Keywords: Propagation models tuning; COST-231; SUI-Model; received signals; drive testing
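For orientation, the untuned COST-231 Hata form that such a calibration starts from can be written as a short Python function; the paper's contribution is fitting these coefficients to Togolese drive-test data, which this baseline sketch does not reflect:

    from math import log10

    def cost231_loss_db(f_mhz, h_base_m, h_mobile_m, d_km, metro=False):
        # Median path loss: f in MHz, antenna heights in m, distance in km.
        a_hm = (1.1 * log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * log10(f_mhz) - 0.8)
        c_m = 3.0 if metro else 0.0      # correction for metropolitan centres
        return (46.3 + 33.9 * log10(f_mhz) - 13.82 * log10(h_base_m) - a_hm
                + (44.9 - 6.55 * log10(h_base_m)) * log10(d_km) + c_m)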
Abstract: Recently, numerous new mobile and web-based application frameworks have been released and adopted both in the software development industry and in research. For example, the KnockoutJS, BackboneJS and ReactJS frameworks currently compete with the different (entirely different) versions of the 'Angular' frameworks. While some of these new frameworks are more popular than others, some are specialised in certain types of applications, and others have specific advanced features or outstanding capabilities that set them above the rest. Moreover, with the increased usage of and demand for mobile applications, the need for cross-platform frameworks has significantly increased as well. In this paper, we discuss the different criteria that identify the strengths and weaknesses of each framework for developing mobile and web applications. We highlight and discuss 12 different features of the latest application frameworks as 'points of comparison'. We then compare five of the trending frameworks (KnockoutJS, BackboneJS, AngularJS, React and Angular2) in a thorough analysis based on our earlier-defined points of comparison, i.e. features. Finally, we focus more deeply on the newly released Angular2 framework, showing the eminent capabilities and value it adds over the other trending frameworks and over its own earlier versions. Overall, our comparative analysis results in a few interesting findings regarding current frameworks, leading us to believe that a new generation might soon emerge from the exponential path of MVC, MV*/MVW and MVVM.
Authors: Mohamed Sultan Full Text
Keywords: Angular; mobile and web-based applications; front-end; JavaScript frameworks
Abstract: In recent years, the integration of wireless communication with the mm-Wave spectrum, driven by scarce spectrum resources, has earned fifth generation (5G) systems tremendous research interest and has challenged the design paradigm of the previous fourth generation (4G) radio access technology. Multiple Input Multiple Output (MIMO), a key to future 5G systems, offers promising performance enhancement with effective spectrum utilization, although it increases the cost of deploying the system as the number of antennas grows; consequently, large simulation-based assessments prevail in the literature, and further practical implementation, assessment and validation in real time are required. This article presents the experimental design, implementation and evaluation of a MIMO testBED, measuring the system's bit error rate (BER) performance and channel capacity using spatial diversity. The prototype utilizes Universal Software Radio Peripheral hardware (USRP NI-2922) together with LabVIEW software toolkits; the results obtained show that the MIMO system's spatial diversity improves BER as well as system channel capacity.
Authors: Aliyu Buba Abdullahi, Akram Hammoudeh, Rafael F. S. Caldeirinha Full Text
Keywords: Universal Software Radio Peripheral (USRP); Multiple Input Multiple Output (MIMO); testBED; Spatial Diversity (SD); Space Time Block Coding (STBC); bit error rate (BER)
Abstract: Wireless networking is the latest trend, and in unpredictable and changing environments it has enormous applications. Business organizations and users in various fields choose wireless because it allows flexibility of location; the attributes supporting this are mobility, portability, and ease of installation. In a mobile ad-hoc network, nodes are almost continuously moving from one location to another, so the MANET topology can change often and unpredictably. Mobility of nodes is one of the major issues of concern in mobile ad-hoc networks because it causes link failures. In this paper a new scheme is suggested that helps mobile nodes maintain routes to the destination with stable route selection, making the recovery phase very efficient and fast. The performance of the proposed routing protocol, named the Selfheal Stable Routing Protocol (SSRP), is evaluated using performance metrics such as Packet Delivery Ratio, Throughput and End-to-End Delay. The study is based on simulation runs adopting a CBR traffic pattern and taking care of node failure scenarios.
Authors: C.Jinshong Hwang, Rosy Pawar, Ashwani Kush Full Text
Keywords: MANET; AODV; security; stable; routing
Abstract: It has become common for many devices to share bottleneck links when users watch streaming video. When the DASH standard is used for adaptive video streaming over HTTP, maintaining good Quality of Experience (QoE) among video players becomes a critical issue. Markov Decision Processes (MDPs) are one attempt at optimizing the streaming process by adopting a policy that maximizes particular QoE parameters. This paper proposes a novel approach called SHARE that uses a state-array representation consisting of a quality measurement called the Data Rate Ratio (DRR) from each player in the network. A third-party network device collects the DRR values of the players and uses an MDP based on discretized DRRs, with a unique reward function, to generate policies for better bitrate selection at runtime. A three-player model is presented. Comparisons with other methods show that players adopting these policies obtain good QoE across various metrics, such as bandwidth utilization, unfairness, re-buffering ratio, instability and average quality, with minimal trade-offs.
Authors: Koffka Khan, Wayne Goodridge Full Text
Keywords: Bottleneck links; DASH; Markov Decision Process; adaptive streaming; Quality of Experience (QoE)
Abstract: Motivated by the large capacity gains in multiple antenna systems when ideal channel state information (CSI) is available at both receiver and transmitter and quadrature amplitude modulation (QAM) is applied, we examine the achievable rates of measurement-based optimization techniques for the Rayleigh fading channel. We consider a complex-valued Gaussian noise distribution and seek the optimal input distribution over a fixed set of signalling points. Using Hermite polynomials under an even-moment constraint, the simulation results show that the information rate is achieved with a unique, optimal input distribution. It is also shown that the computational complexity can be reduced by factorizing the optimal distribution into a product of symmetrical distributions.
Authors: Alaa Hasan, Ian D. Marsland Full Text
Keywords: Polar codes; MIMO fading channel; Hermite polynomials; channel capacity
Abstract: The need for efficient radio channel quality measurement to support the planning, operation and management of data communication networks has increased. An important parameter for measuring the data communication quality of a radio network is channel throughput. In this research work, the impact of end-user location and of radio propagation channel environmental parameters on channel throughput performance has been experimentally investigated in an operational 3G network with upgraded HSDPA technology. Firstly, the results show that the near-far effect has an enormous impact on channel throughput, especially as the end-user moves towards the cell edges; this implies that the packet drop rate on packet data communication links increases as the user moves towards the cell edges. Secondly, it has been shown how end-user data throughput performance drops as the propagation loss exponent increases, implying that data communication quality in HSDPA mobile broadband is environment dependent. Hence, to provide a good end-user experience, the influence of different radio propagation environments on mobile data communication quality must be considered in the design and deployment of cellular networks.
Authors: Anthony Osaigbovo Igbinovia, Oyeyemi Akinpelumi Chris, Joseph Isabona Full Text
Keywords: Mobile broadband; radio propagation channel; radio propagation environment; end-user location; channel throughput
Abstract: Wireless sensor nodes consist of communication devices, physical devices (environmental sensors), a processing unit, memory and a radio. Optimizing the power consumed by the sensor nodes is always a challenge, and the power consumed during communication is high; optimizing power and energy during communication is therefore really necessary. This paper addresses this issue by implementing a stochastic power model of wireless sensor nodes to handle Mission Critical Systems (MCS). Mission Critical Systems are systems that handle tasks with real-time deadlines: if a deadline is not met, something catastrophic may occur, and sensor nodes sleeping during critical times would lead to an unstable system. So, instead of going to the sleep state, the node changes to the idle state to handle critical tasks. In this paper, the motes are characterized using a Semi Markov Decision Process (SMDP), and various policies are framed for non-critical and mission critical systems. Mission Critical Systems use nodes that meet the deadlines, thereby optimizing the power and energy used. Our experimental setup improves the energy efficiency of MCS by at least 25%. The model is validated using Crossbow sensor motes; it also selects the action in the node in order to suggest the best policy for better energy optimization. The SMDP model is solved by dynamic programming using the value iteration function with discounted rewards. Our results show that nodes can go from the active to the sleep state for non-critical applications and from the active to a semi-sleep state for mission critical applications, with performance results showing that 25% more power saving is achieved.
Authors: T. S. Pradeepkumar, Mohammad S. Obaidat, P. Venkata Krishna, V. Saritha Full Text
Keywords: Wireless sensor networks; simulation; Semi Markov Decision Process (SMDP); Markov process; dynamic programming
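To make the value-iteration step concrete, the following minimal sketch solves a toy power-state decision process with discounted rewards. The states, actions, transition probabilities and reward values are invented placeholders, not the paper's parameters, and the SMDP's state sojourn times are collapsed into a plain MDP here for brevity:

```python
# Minimal discounted value iteration over a toy sensor power-state model.
# All numbers below are hypothetical placeholders.

GAMMA = 0.9          # discount factor
THETA = 1e-6         # convergence threshold

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "active": {
        "stay":  [(1.0, "active", -5.0)],          # high power draw
        "idle":  [(1.0, "idle",   -1.0)],          # semi-sleep, can still react
        "sleep": [(0.9, "sleep",  -0.1), (0.1, "active", -8.0)],
    },
    "idle": {
        "wake":  [(1.0, "active", -5.0)],
        "stay":  [(1.0, "idle",   -1.0)],
    },
    "sleep": {
        "wake":  [(1.0, "active", -6.0)],
        "stay":  [(1.0, "sleep",  -0.1)],
    },
}

V = {s: 0.0 for s in P}
while True:                                        # value iteration sweep
    delta = 0.0
    for s in P:
        best = max(
            sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
            for outcomes in P[s].values()
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:
        break

# Greedy policy extraction from the converged value function.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```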
Abstract: The major challenge in a call centre with respect to customer satisfaction is the long time customers spend waiting in the queue before they are attended to. Beyond the problem of queueing is the nature of the service itself: the effective resolution of customer issues. The challenge is thus to determine a routing rule that can reduce waiting time while also enhancing call resolution rates. In this study, we conducted a simulation analysis, using a Java simulation library, of seven routing rules, four oriented towards waiting time and three towards call resolution (CR) rate, in a bid to determine the optimal rule. The data used for the simulation was collected from the call centre of a telecommunications company in Nigeria. The simulation identified the optimal rules for waiting time and for CR rate, and a hybrid framework was developed from the outcome. The proposed routing rule achieves low waiting time and enhanced call resolution; this will improve and optimize call centre operations as well as increase customer satisfaction and brand loyalty.
Authors: Mughele Ese Sophia, Stella Chiemeke, Konyeha Susan, Kingsley Ukaoha, Dorothy Akpon-Ebiyomare Full Text
Keywords: Optimization; hybrid rule; routing rule; queue; call centre; call resolution
Abstract: Data centers play a decisive role in the online corporate world. At present, data centers do not merely house servers, switches and routers; they provide speedy services to vendors and uninterrupted network connectivity to clients' websites. Managing data center traffic and forecasting resource utilization are increasingly challenging tasks in effective data centers. This paper observes and analyses live traffic in real-world data center networks and applies forecasting techniques for traffic optimization and proper resource utilization. We propose a forecasting model for data centers to predict and estimate proper bandwidth utilization in real-world situations. Our model can identify upcoming network trends, bandwidth demand and essential growth for futuristic assessment. The work is based on day-to-day network traffic engineering and observation using the exponential smoothing method for time series; this approach optimizes upcoming network traffic for data centers.
Authors: Samar Raza Talpur Full Text
Keywords: Network engineering; traffic optimization; exponential smoothing method; forecasting techniques
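The exponential smoothing step mentioned above is compact enough to sketch directly. The traffic series and smoothing constant below are illustrative, not the authors' data, and the paper's exact smoothing variant is not specified here:

```python
# Simple exponential smoothing for one-step-ahead bandwidth forecasting.

def exponential_smoothing(series, alpha):
    """Return the smoothed series; the last value is the next forecast."""
    smoothed = [series[0]]                  # initialize with first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hourly bandwidth utilization in Mbps (made-up sample data).
traffic = [310, 295, 330, 402, 388, 420, 455, 440, 470]
forecast = exponential_smoothing(traffic, alpha=0.4)[-1]
print(f"Next-hour bandwidth forecast: {forecast:.1f} Mbps")
```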
Abstract: Advancement in network sensor technology has contributed a lot towards a better society and has opened new avenues of research. Underwater Wireless Sensor Networks (UWSN) have attracted considerable attention from researchers due to their military applications, environmental monitoring and natural disaster prediction. Dynamic underwater conditions and node movement make designing an efficient routing protocol for underwater wireless sensor networks a challenging task. This paper presents a comprehensive survey and analysis of existing routing protocols for underwater wireless sensor networks. The main contributions of this paper include a classification of the existing routing techniques based on their routing mechanisms, a comparison and analysis of these techniques based on various important features, and a discussion of the major obstacles to designing an efficient routing protocol for UWSN.
Authors: Samera Batool, Muazzam A. Khan Khattak, Nazar Abbas Saqib, Saad Rehman Full Text
Keywords: Underwater Wireless Sensor Networks; Wireless Sensor Networks
Abstract: Due to the huge amount of online learning material, e-learning environments are becoming very popular as a means of delivering lectures. One of the most common e-learning challenges is how to recommend quality learning materials to students. Personalized e-learning recommender systems help to reduce information overload by tailoring learning material to meet individual students' learning needs. This research focuses on using various recommendation and data mining techniques for personalized learning in e-learning environments.
Authors: Hayder Murad, Linda Yang Full Text
Keywords: eLearning; recommender system; data mining
Abstract: Precision agriculture involves observing the crop production process and applying appropriate actions to improve production efficiency. In this paper, a smart vision system is developed to monitor specialty crops, which include fruits and vegetables. The smart vision system is composed of an image acquisition module and an image processing element. The image acquisition module is a modified point-and-shoot camera that can detect both visible and near-infrared wavebands, while the image processing element takes the multispectral image as input and processes it using a customized image processing algorithm for crop assessment. The smart vision system was tested using an experimental apple orchard, a commercial onion field, and a peach orchard. Results showed that the smart vision system was able to differentiate different watering inputs in the apple orchard, recognize the blossoms in the peach orchard, and detect the variation in the onion field.
Authors: Duke M Bulanon, Esmaeil Fallahi Full Text
Keywords: Digital image processing; machine vision; remote sensing; specialty crops
Abstract: This paper reviews and introduces a modified CPD-OFDM (Orthogonal Frequency Division Multiplexing with a cross-polarization diversity structure) system that mitigates the performance degradation caused by frequency offset. In a conventional OFDM system, the frequency offset causes inter-channel interference (ICI) and degrades the signal-to-noise ratio at the receiving end. The cross-polarization diversity structure, composed of two pairs of cross-polarized circular antennas in each transceiver, can remarkably suppress the odd-time reflected waves at each receiving end. Because of this ability to remove delayed multipath waves, the cross circular polarization diversity structure can reduce time delay spread and ICI. Therefore, the modified CPD-OFDM system can improve system performance as well as spectrum efficiency. To investigate the performance degradation of the CPD-OFDM system due to frequency offset, computer simulation and theoretical analysis were conducted. The simulation results clearly show that the CPD-OFDM system achieves a 1-3 dB improvement over the conventional OFDM system.
Authors: Deock-Ho Ha, Kyu-il Han Full Text
Keywords: Modified CPD-OFDM; diversity structure; divides sub-channel; minimizing frequency offset; improving performance
Abstract: In this paper, SCADA (supervisory control and data acquisition) HMI (human machine interface) software was implemented for a drive system consisting of two generators and two motors. The software was implemented in NI LabVIEW on top of the CAN bus, and dynamic monitoring and alarms were also implemented. The data collected for the HMI include bus voltage and current, speed, and the temperatures of the IGBTs, bearings and windings; the running status and fault status of the generator and motor controllers are also sent to the HMI. The commands sent to the two generators and two motors include start/stop, speed, bus voltage, and PID parameters for the DSP (Kp and Ki for the speed loop, and Kp and Ki in the D axis for the current loop). The main work focuses on the design of the CAN communication protocol, multichannel CAN bus control implemented in LabVIEW, and the combination and unpacking of bit data and multi-byte data. In practical engineering applications, this system can realize the automatic supervision and control process efficiently and reliably.
Authors: Bei CHEN, Wenlun CAO, Yuyao HE Full Text
Keywords: Supervisory control and data acquisition (SCADA); human machine interface (HMI); testing system; LabVIEW; CAN 2.0B
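The bit/byte packing and unpacking task mentioned above can be illustrated with Python's `struct` module. The 8-byte field layout below (speed, bus voltage, IGBT temperature, status bits) is a hypothetical example, not the protocol defined in the paper:

```python
# Packing and unpacking an 8-byte CAN 2.0B data field.
import struct

# Little-endian: uint16 speed, uint16 voltage, int16 temp, 2 status bytes.
FMT = "<HHhBB"

def pack_frame(speed_rpm, bus_voltage_dV, igbt_temp_dC, status, fault):
    return struct.pack(FMT, speed_rpm, bus_voltage_dV, igbt_temp_dC, status, fault)

def unpack_frame(payload):
    speed, voltage, temp, status, fault = struct.unpack(FMT, payload)
    return {
        "speed_rpm": speed,
        "bus_voltage_V": voltage / 10.0,   # transmitted in 0.1 V units
        "igbt_temp_C": temp / 10.0,        # transmitted in 0.1 degC units
        "running": bool(status & 0x01),    # bit 0: running flag
        "fault": fault,
    }

payload = pack_frame(1500, 5423, 785, 0x01, 0x00)
print(unpack_frame(payload))
```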
Abstract: Every day, millions of documents in the form of articles, tweets, and blog posts are generated by Internet users. Many of these documents are not annotated with labels, making the task of retrieving similar documents non-trivial. The process of grouping similar unannotated documents together is called document clustering. When properly done, a cluster should only contain documents that are related to each other in some manner. In this paper, we propose a clustering technique based on Gaussian word embeddings, which we illustrate using word2gauss. We demonstrate that the method produces coherent clusters that approach the performance of K-means on Purity and Entropy scores while achieving higher recall as measured by Inverse Purity.
Authors: Inzamam Rahaman, Patrick Hosein Full Text
Keywords: Document clustering; word embeddings; information retrieval
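Since the evaluation above hinges on Purity and Inverse Purity, here is a small sketch of how those scores are commonly computed; the cluster assignments and gold labels below are made-up toy data:

```python
# Purity and Inverse Purity for evaluating clusters against gold labels.
from collections import Counter

def purity(clusters, labels):
    """clusters, labels: parallel lists of cluster ids / gold class ids."""
    total = len(labels)
    correct = 0
    for c in set(clusters):
        members = [labels[i] for i, k in enumerate(clusters) if k == c]
        correct += Counter(members).most_common(1)[0][1]
    return correct / total

clusters = [0, 0, 0, 1, 1, 2, 2, 2]
labels   = ["a", "a", "b", "b", "b", "c", "c", "a"]
print("Purity:        ", purity(clusters, labels))
# Inverse Purity swaps the roles of clusters and gold classes,
# rewarding clusters that capture whole classes (recall-oriented).
print("Inverse purity:", purity(labels, clusters))
```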
Abstract: This project is a microcontroller-based system to prevent children from being left unattended in a hot vehicle. Some products claiming to help prevent vehicular heatstroke are available, but they are marketed to end users rather than to automotive OEMs. Undergraduate engineering students created a system that raises the parent's awareness and triggers multiple alarms to prevent this situation from occurring. The circuit reads the voltage from the car battery/alternator to determine whether the engine is running and uses pressure sensors to detect whether a child is in the car seat. The system sounds an alarm through a local speaker and uses a Bluetooth connection to a smartphone to give a secondary alert in case the parent leaves the child unattended. The Bluetooth connection is not limited to phones and could easily be integrated directly into the car's onboard computer with the help of OEMs.
Authors: Dylan Howell, Sara Talley, Samwoo Seong Full Text
Keywords: Vehicular heatstroke; Bluetooth; microcontroller
Abstract: Content-Based Image Retrieval has been a fast-advancing area since the 1990s, driven by the growth of the Internet and by technology that makes it easy to acquire images. Image data must therefore be organized so that image database queries return results as quickly as possible, despite the wide range of topics now available. In this paper, we introduce a novel technique based on global image features obtained by interest point detection and graph theory, in particular Delaunay triangulations, to produce a graph that can be measured for comparison. The technique shows promising results and can be regarded as flexible in the sense that its parts can be adapted or upgraded to achieve better performance.
Authors: Daniel Valdes-Amaro, Eduardo Lopez-Prieto, Arturo Olvera-Lopez, Carlos Guillen-Galvan Full Text
Keywords: Content-based image retrieval; interest point detection algorithms; graph theory; Delaunay triangulations
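The core pipeline step above, building a Delaunay graph over detected interest points, can be sketched as follows. ORB is used here as a stand-in detector since the paper's exact detector is not specified, and "query.jpg" is a placeholder path (requires `pip install opencv-python scipy numpy`):

```python
# Build a Delaunay graph over interest points detected in an image.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def delaunay_graph(image_path, max_points=200):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_points)
    keypoints = orb.detect(img, None)
    pts = np.array([kp.pt for kp in keypoints])
    tri = Delaunay(pts)
    # Collect the unique edges of the triangulation as the image's graph.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return pts, edges

pts, edges = delaunay_graph("query.jpg")
print(f"{len(pts)} interest points, {len(edges)} Delaunay edges")
```

Graph-level measures (edge-length statistics, degree distributions, etc.) computed on this structure can then serve as the comparable global descriptor the abstract describes.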
Abstract: Disaster relief with satellite-based Synthetic Aperture Radar (SAR) is investigated. SAR can be used for disaster relief, and Interferometric SAR (InSAR) allows elevation estimation. It is therefore applicable to estimating seismic changes and elevation changes due to earthquakes, landslides, slope collapse, etc. Experiments were conducted for the earthquake disaster that occurred in Kumamoto, Japan, and for the river flooding caused by heavy typhoon rain in Oita, Japan. Sentinel-1 SAR imagery shows an enormous potential for disaster relief.
Authors: Shogo Kajiki, Hiroshi Okumura, Kohei Arai Full Text
Keywords: Synthetic Aperture Radar (SAR); interferometry; disaster relief; earthquake; water flooding
Abstract: In high-tech and intensively developing sectors of the economy, including the IT industry, the set of professional skills and competencies necessary for specialists to work successfully is changing rapidly. This is due to the dynamics of the composition and capabilities of key and applied technologies. Experts working in the IT industry have to constantly monitor changes in approaches to building architectures and in the computational/software systems themselves. Under such conditions, an information-computing service that automatically identifies trends in the development of technologies and the corresponding professional skills becomes important. Taking these trends into account, it is possible to purposefully improve the skills of specialists and to adjust the curricula of universities, centers for professional advancement, etc. As the basis for identifying trends in technology development, the authors propose using information from international patent databases, since large IT companies file many patents before releasing a new high-tech solution (hardware or software) to the market. The subsequent use of these patents in products then changes the set of professional skills in demand in the labor market. The article analyzes the current demand of employers for IT professionals through queries to the relevant labor-market information systems. The possibilities of using professional social networking sites for recruiting personnel by organizations and for job searching by specialists are considered. A prototype of a predictive learning service for a professional social networking site was developed. It provides monitoring of the demand for professional skills in the labor market, along with the analysis of patents on the technologies underlying each existing and projected professional skill. The developed service will make it possible to determine the levels of demand for professional skills, to actualize and improve job seekers' professional skills, to organize professional social networking sites, and to form personal training programs on promising technologies.
Authors: Evgeny Nikulchev, Dmitry Ilin, Gregory Bubnov, Egor Mateshuk Full Text
Keywords: Online social networks; social networking sites; technology life cycle; predictive learning; patent activity analysis; professional skills; topic detection; LinkedIn; ResearchGate; labor market; scalable services; decision making support; Node.JS
Abstract: 360-degree surround photography, or photospheres, has taken the world by storm as the new medium for content creation, providing viewers a rich, immersive experience compared to conventional photography. With the emergence of Virtual Reality as a mainstream trend, 360-degree photography is increasingly important in offering the general public a practical way to capture virtual-reality-ready content from their mobile phones without explicit tool support or knowledge. Even though the amount of 360-degree surround content being uploaded to the Internet continues to grow, there is no proper way to index it or process it for further information, because of the difficulty of image-processing photospheres given the distorted way objects are embedded; the challenge lies in the way 360-degree panoramic photospheres are saved. This paper presents a unique and innovative technique named Photosphere to Cognition Engine (P2CE), which allows cognitive analysis of 360-degree surround photos using existing image cognitive analysis algorithms and APIs designed for conventional photos. We have optimized the system using a wide variety of indoor and outdoor samples and extensive evaluation approaches. On average, P2CE provides up to a 100% improvement in the accuracy of image cognitive analysis on photospheres over direct use of conventional, non-photosphere-based image cognition systems.
Authors: Madhawa Vidanapathirana, Lakmal Buddika Meegahapola, Indika Perera Full Text
Keywords: 360 photography; image processing; photosphere; cognition; 360 degree surround photography
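A common building block for running conventional vision APIs on photospheres is reprojecting the equirectangular panorama into undistorted perspective views. The sketch below is a generic reprojection under that assumption, not necessarily the P2CE pipeline itself, and the axis/sign conventions are one of several reasonable choices:

```python
# Generic equirectangular-to-perspective reprojection with NumPy.
import numpy as np

def perspective_view(equi, yaw, pitch, fov_deg=90.0, out_w=640, out_h=480):
    """Sample a pinhole-camera view (yaw/pitch in radians) from an
    equirectangular image of shape (H, W, 3)."""
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)
    # Ray direction for every output pixel (camera looks down +z).
    x = np.arange(out_w) - out_w / 2
    y = np.arange(out_h) - out_h / 2
    xv, yv = np.meshgrid(x, y)
    dirs = np.stack([xv, yv, np.full_like(xv, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate by pitch (about x) then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T
    # Convert rays to spherical coordinates, then to equirectangular pixels.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))         # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equi[v, u]

# The resulting undistorted view can then be fed to any conventional
# image-cognition API (object detection, captioning, etc.).
```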
Abstract: According to recent studies, billions of objects are expected to be connected wirelessly by 2020. Low Power Wide Area (LPWA) networks are already playing an important role in connecting billions of the new devices making up the IoT. Long Range (LoRa) and Narrowband-IoT (NB-IoT) are currently the two leading technologies attracting considerable research interest. This paper provides a comprehensive, comparative survey of the current research and industrial state of NB-IoT and LoRa in terms of power efficiency, capacity, quality of service (QoS), reliability and range of coverage. The outcome of this survey is that unlicensed LoRa is more advantageous than NB-IoT in terms of energy efficiency, capacity and cost, while NB-IoT offers benefits in terms of reliability, resilience to interference, latency and QoS. It is further shown that, despite the considerable research and development carried out so far on existing LPWA technologies, challenges remain, and the paper proposes potential future research directions to address them.
Authors: Munguakonkwa Emmanuel Migabo, Karim Djouani, Anish Kurien, Thomas Olwal Full Text
Keywords: Low Power Wide Area (LPWA); Long Range (LoRa); Narrowband IoT (NB-IoT); comparative; energy efficiency; quality of service (QoS)
Abstract: An effective education system can be evaluated through the implementation of its Input-Process-Output framework. Quality instruction is one of the input component indicators, with student engagement as its binding measure. In the classroom environment, facial expressions are used by teachers to gauge the affect state of the class. Incorporating technology in education helps students prepare for life-long learning, and emerging technologies such as affective computing are one of today's trends for improving the quality of instruction delivery by analyzing students' affect states. This paper proposes a system for classifying student engagement using facial features. The conceptual framework of the study includes multiple face detection, facial action unit extraction and a classification model. Different algorithms were tested and compared to best configure the proposed predictive classification model. Varied test datasets were also used during the experiments to gauge the accuracy and overall performance of this class engagement analyzer prototype.
Authors: Roy Manseras, Thelma Palaoag, Alvin Malicdem Full Text
Keywords: Class engagement; affective computing; facial feature extraction; action units
Abstract: Service providers leverage cloud ecosystems and cloud e-marketplaces to increase the business value of their services and to reach a wider range of service users. The operations of commercial e-marketplaces can be further enhanced by service composition mechanisms that automatically aggregate atomic services into composite offerings that meet complex user requirements. Existing approaches to cloud service selection have yet to achieve this: users are currently constrained to choose from a set of predefined atomic services or, at best, to manually configure their desired features and QoS requirements to realize complex requirements, provided they have deep knowledge of the service domain. In this paper, a constraint-based approach to service composition and selection is proposed to address this problem. The approach applies constraint-based automated reasoning on feature models to formally guide the aggregation of atomic services into composite services that satisfy complex requirements with minimal user involvement. The plausibility of the approach is demonstrated via an illustrative customer relationship management (CRM) service ecosystem. The study offers a credible way to replicate, in cloud service e-marketplaces, the kind of user experience currently available on e-commerce platforms.
Authors: Azubuike Ezenwoke, Olawande Daramola, Matthew Adigun Full Text
Keywords: cloud computing; ecosystem; e-marketplace; feature model; constraint programming
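To illustrate the constraint-programming flavor of such composition, here is a toy model that selects atomic services for a CRM-style offering. The feature names, QoS numbers and constraints are invented for illustration and are not the paper's feature model (requires `pip install python-constraint`):

```python
# Toy constraint model for composing a cloud offering from atomic services.
from constraint import Problem

problem = Problem()
# Each feature variable picks one candidate atomic service (or None).
problem.addVariable("storage",   ["s3like", "blobstore"])
problem.addVariable("email",     ["mailerA", "mailerB", None])
problem.addVariable("analytics", ["dashX", None])

LATENCY = {"s3like": 40, "blobstore": 25, "mailerA": 90,
           "mailerB": 60, "dashX": 120, None: 0}
COST    = {"s3like": 5, "blobstore": 8, "mailerA": 3,
           "mailerB": 6, "dashX": 10, None: 0}

# Cross-tree constraint: dashX requires the faster storage backend.
problem.addConstraint(
    lambda a, s: a is None or s == "blobstore", ("analytics", "storage"))
# Global QoS budget on total monthly cost.
problem.addConstraint(
    lambda s, e, a: COST[s] + COST[e] + COST[a] <= 18,
    ("storage", "email", "analytics"))

# Rank feasible compositions by aggregate latency.
solutions = problem.getSolutions()
best = min(solutions, key=lambda sol: sum(LATENCY[v] for v in sol.values()))
print(best)
```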
Abstract: Natural language processing (NLP) is an active research field that would benefit greatly from a shared understanding of its concept terms. Modeling this research domain with an ontological semantic representation that defines and links its terms would clearly benefit the research community. This paper presents an ontology model for NLP. The ontological model serves as a tool for analyzing and understanding the knowledge of the domain and helps practitioners extract and aggregate information, generating an explicit, formal and reusable representation of NLP concepts along with their interrelationships.
Authors: Auhood Abdullah Alfaries, Randa Hamad Aljably, Muna Saleh Al-Razgan Full Text
Keywords: NLP; ontologies; knowledge domain; concepts
Abstract: In this paper, we describe three different approaches for determining the distance map of a binary image. The algorithms that solve this problem are known as Distance Transforms. These algorithms operate on binary images but can be extended to any type of digital image if a conversion algorithm that turns a digital image into a binary one is executed first; we therefore also examine how to transform a regular digital image into a binary, black-and-white image. A Distance Transform algorithm operates on a binary image consisting of feature and non-feature pixels. It outputs a distance map or distance matrix in which each cell corresponds to a pixel of the input image and contains the distance to the nearest feature pixel. Distance Transforms represent a natural way to blur feature locations geometrically, and they enable other image operations such as skeletonizing, image matching, object recognition, path planning and navigation. Five test cases are presented and the execution times of the three techniques are compared.
Authors: Rama Prasada Reddy Peddireddy, Sudhanshu Kumar Semwal Full Text
Keywords: City-block; Euclidean distance transformations; image processing; Processing language
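One classical formulation of the city-block case is the two-pass (forward/backward) chamfer algorithm, sketched below along with simple thresholding to binarize a grayscale input. This is a compact textbook version for illustration, not necessarily one of the paper's three compared implementations:

```python
# Two-pass city-block distance transform on a binary image.
INF = 10**9

def binarize(gray, threshold=128):
    """Threshold a grayscale 2D list into a 0/1 feature map."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def cityblock_dt(binary):
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate distances from top-left.
    for y in range(h):
        for x in range(w):
            if y > 0: d[y][x] = min(d[y][x], d[y-1][x] + 1)
            if x > 0: d[y][x] = min(d[y][x], d[y][x-1] + 1)
    # Backward pass: propagate distances from bottom-right.
    for y in reversed(range(h)):
        for x in reversed(range(w)):
            if y < h-1: d[y][x] = min(d[y][x], d[y+1][x] + 1)
            if x < w-1: d[y][x] = min(d[y][x], d[y][x+1] + 1)
    return d

feature_map = [[0, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]]
for row in cityblock_dt(feature_map):
    print(row)
```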
Abstract: Decentralized cooperative control schemes are a prime research focus due to their resemblance to biological systems and their many advantages over centralized schemes. This paper presents a simulation framework for a decentralized cooperative control scheme for differential drive mobile robots, with a focus on formation control and obstacle avoidance. The framework employs a hierarchical three-layer model: the highest layer defines intermediate waypoints, followed by a navigation layer and a trajectory tracking layer. The navigation layer employs virtual and behavioral structures along with artificial potential field functions, using non-linear systems theory, to generate robot trajectories. Due to the non-holonomic nature of differential drive robots, a robust sliding mode controller is employed for trajectory tracking. Simulation results for the individual layers and for the integrated platform are presented for formation control with obstacle avoidance in a practical scenario with reasonable assumptions, and they validate the working of the proposed scheme.
Authors: Fahad Tanveer, Muhammad Bilal Kadri Full Text
Keywords: Decentralized control; cooperative control; formation control; obstacle avoidance; differential drive robots; artificial potential field; virtual structures; behavioral structures; non-holonomic; sliding mode control
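The artificial potential field idea in the navigation layer can be sketched for a single robot as follows. The gains and obstacle geometry are illustrative rather than the paper's tuned parameters, and the sliding-mode tracking layer is omitted:

```python
# Attractive/repulsive artificial potential field navigation (Khatib-style).
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 0.5, 2.0   # gains and repulsion cutoff radius

def apf_velocity(pos, goal, obstacles):
    """Negative gradient of the combined potential at position `pos`."""
    v = -K_ATT * (pos - goal)                     # attractive term
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < RHO0:                        # repulsion only nearby
            v += K_REP * (1/rho - 1/RHO0) / rho**2 * (diff / rho)
    return v

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.4])]
for _ in range(200):                              # simple Euler integration
    pos = pos + 0.05 * apf_velocity(pos, goal, obstacles)
print("Final position:", pos)
```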
Abstract: Data centers were originally composed of servers handling different services and have since developed into rack-mounted computers and blade server systems. At the same time, the traffic between servers is increasing, and the performance of the interconnection network between servers has a significant influence on the overall performance of the system. Motivated by the recent trend toward resource pooling in data centers, this paper proposes a rack-scale interconnection network structure named RSI. In keeping with the features of the interconnection structure, the routing tables are designed with two levels, which reduces table size and makes adaptive routing convenient to realize. A low-cost adaptive routing scheme called LAR, which uses a threshold to guide route selection, is then proposed, along with a deadlock prevention mechanism that allows LAR to avoid deadlock with only two virtual channels (VCs). In addition, a fault tolerance algorithm is used to deal with potential failures. Compared to current rack-level interconnection topologies, the hierarchical RSI topology can support a larger number of nodes with comparable performance. Finally, the evaluation results show that under extreme traffic patterns LAR achieves about six times the throughput of minimal routing, which performs well only in uniform random traffic.
Authors: Mingche Lai, Xiangxi Zou, Shi Xu, Jie Jian, Xingyun Qi, Jiaqing Xu Full Text
Keywords: Rack scale; resource pooling; interconnection; adaptive routing; fault-tolerant
Abstract: New credentialing standards issued by the California Commission on Teacher Credentialing require program administrators to maintain a great deal of assessment data during fieldwork, including supervision activities, hours, training, and teaching performance expectation evaluations. These data, particularly the evaluation elements, are to be used for candidate assessment and program improvement. The challenge for programs is not only to archive these data points, but also to make the best possible use of the data. Program leadership must also document that university supervisors and cooperating teachers have received appropriate training prior to conducting visits or hosting students. The Qualtrics Offline App provides a simple way for all stakeholders involved in supervising fieldwork experiences to maintain compliance with these new requirements, while also supporting meaningful candidate assessment and data-informed program improvement.
Authors: Chris Boosalis, Oddmund R. Myhre Full Text
Keywords: Qualtrics; Qualtrics App; evaluation; assessment; program assessment
Abstract: In the natural world, many species amplify their intellectual abilities by working together in closed-loop systems. Known as Swarm Intelligence (SI), this process has been deeply studied in schools of fish, flocks of birds, and swarms of bees. The present research employs artificial swarming algorithms to create "human swarms" of online users and explores whether swarming can amplify a group's ability to detect deceit. Researchers recruited 168 participants and divided them randomly into five online swarms, each comprising 30 to 35 members. Working alone and in networked groups, participants were tasked with evaluating a set of 20 video clips of smiling people. Each video clip depicted either 1) an authentic smile generated in response to a humorous cue, or 2) a deceitful smile generated falsely upon command. Across the population of 168 participants, the average individual incorrectly identified the deceitful smiles in 33% of the trials. When making evaluations as real-time swarms, the error rate dropped to 18% of trials. This large reduction suggests that by swarming, human groups can significantly amplify their ability to detect deceit in facial expressions, and that swarming should be explored for amplifying other forms of social intelligence.
Authors: Louis Rosenberg, Chris Hornbostel, Niccolo Pescetelli Full Text
Keywords: Swarm intelligence; artificial swarm intelligence; collective intelligence; human swarming; artificial intelligence
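One way to contextualize the 33% individual versus 18% swarm figures is a Condorcet-style majority-vote baseline, sketched below. It assumes fully independent judges, which real (correlated) human judgments violate, so it is only a reference point and not a model of the paper's swarming algorithm:

```python
# Error rate of a simple majority vote among n independent judges,
# each individually wrong with probability p (n odd).
from math import comb

def majority_error(n, p):
    """P(majority is wrong) = P(more than n/2 judges err)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"33 independent judges, 33% individual error: "
      f"{majority_error(33, 0.33):.3f} majority error")
```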
Abstract: TALERUM is an interactive second-language learning environment designed for use throughout the vast and sparsely populated West-Nordic area, where Danish is taught as a foreign language (and, for historical reasons, not always a popular one). In a town-like environment, the pupil moves between shops, schools, cafes and a home base where the session begins and the pupil is received as an exchange student by the parents of her 'roomie'. Through user-initiated dialogues with the game characters, the pupil learns about her secret mission. Talerum thus uses elements of informal dialogue, game logic and relatively deep semantic analysis to hold the attention of the generation-IT language student. The Talerum software is open source, and the application is portable to other language locales. An English version is in preparation.
Authors: Peter Juel Henrichsen Full Text
Keywords: Computer assisted language learning (CALL); dialogue systems; gaming
Abstract: A new initiative at Texas Tech University uses hybrid education and collaborative learning modules to integrate new designs that solve real-world medical challenge problems. Prehospital care encompasses everything from the point of injury in the field to the receiving definitive care facility, and accounts for 87% of combat fatalities. The environments span rural settings, wilderness, and extreme environments, both civilian and military, and provide broad opportunities for technological innovation to optimize outcomes. Our target audience is elementary schools, and our focus is STEM mentoring and critical problem solving. Through rapid prototyping of new concepts that enhance quality of life and ultimately accelerate to full functionality and sustainability in relatively austere environments, students gain an appreciation for the scientific process. We will demonstrate our process and a minimum of two devices that incorporate green energy technologies applicable to this interdisciplinary domain.
Authors: Cody Fell, Annette Sobel, Marc Ordonez Full Text
Keywords: Pre-hospital; experiential; additive manufacturing
Abstract: The proliferation of projection mapping and computer vision techniques has made it possible to create a multiplicity of dynamic, illuminated environments that adapt to user intervention. This paper describes a unique system for an illuminated, machine-readable matrix of objects that performs real-time computation and dynamic projection mapping. Illuminated, tangible-interactive matrices have immediate applications as collaborative computation tools for users who want to leverage matrix-based mathematical modeling techniques within a friendly and accessible environment. The system is designed as an open source kit of both off-the-shelf items (such as Lego) and components that are inexpensively fabricated with standard equipment (such as laser cutters). This paper outlines 1) a system of hardware and software for the tangible-interactive matrix, 2) case study applications of the tangible-interactive matrix in various disciplines such as urban planning and logistics, and 3) a discussion of possible directions for future research and experimental design.
Authors: James Ira Winder, Kent Larson Full Text
Keywords: Interactive displays; tangible user interface; projection mapping; computer vision; decision-support systems; collaboration; Lego