8th International Conference on Cloud Computing: Services and Architecture (CLOUD 2019)

July 13-14, 2019, Toronto, Canada

Accepted Papers


    Service Level Driven Job Scheduling in Multi-Tier Cloud Computing: A Biologically Inspired Approach
    Husam Suleiman and Otman Basir, University of Waterloo, Canada
    ABSTRACT
    Cloud computing environments often have to deal with random-arrival computational workloads that vary in resource requirements and demand high Quality of Service (QoS) obligations. It is typical that a Service-Level-Agreement (SLA) is employed to govern the QoS obligations of the cloud computing service provider to the client. A typical challenge service providers face every day is maintaining a balance between the limited resources available for computing and the high QoS requirements of varying random demands. Any imbalance in managing these conflicting objectives may result in either dissatisfied clients and potentially significant commercial penalties, or an over-resourced cloud computing environment that can be significantly costly to acquire and operate. Thus, scheduling the clients' workloads as they arrive at the environment to ensure their timely execution has been a central issue in cloud computing. Various approaches have been reported in the literature to address this problem: Shortest-Queue, Join-Idle-Queue, Round Robin, MinMin, MaxMin, and Least Connection, to name a few. However, optimization strategies of such approaches fail to capture QoS obligations and their associated commercial penalties. This paper presents an approach for service-level driven load scheduling and balancing in multi-tier environments. Joint scheduling and balancing operations are employed to distribute and schedule jobs among the resources, such that the total waiting time of client jobs is minimized, and thus the potential of a penalty to be incurred by the service provider is mitigated. A penalty model is used to quantify the penalty the service provider incurs as a function of the jobs' total waiting time. A Virtual-Queue abstraction is proposed to facilitate optimal job scheduling at the tier level. This problem is NP-complete; thus, a genetic algorithm is proposed as a tool for computing job schedules that minimize potential QoS penalties and hence minimize the likelihood of dissatisfied clients.
    KEYWORDS

    Heuristic Optimization, Job Scheduling, Job Allocation, Load Balancing, Multi-Tier Cloud Computing
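
    The genetic-algorithm idea in the abstract above can be illustrated in a few lines. Below is a minimal Python sketch that evolves job orderings on a single virtual queue to minimize total waiting time; the job durations, operators and parameters are illustrative assumptions, not the authors' model.

import random

def total_waiting_time(order, durations):
    # Each job waits for the total service time of the jobs queued before it.
    waited, elapsed = 0.0, 0.0
    for job in order:
        waited += elapsed
        elapsed += durations[job]
    return waited

def order_crossover(p1, p2):
    # OX crossover: keep a slice of parent 1, fill the rest in parent 2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def evolve(durations, pop_size=50, generations=200, mutation_rate=0.1):
    n = len(durations)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: total_waiting_time(o, durations))
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = order_crossover(*random.sample(survivors, 2))
            if random.random() < mutation_rate:   # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: total_waiting_time(o, durations))

durations = [3.0, 1.0, 4.0, 1.5, 9.0, 2.5]        # hypothetical job service times
best = evolve(durations)
print(best, total_waiting_time(best, durations))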


    Evaluating Service Trustworthiness for Service Selection in Cloud Environment
    Maryam Amiri and Leyli Mohammad-Khanli, University of Tabriz, Iran
    ABSTRACT
    Cloud computing is becoming increasingly popular and more business applications are moving to the cloud. In this regard, services that provide similar functional properties are increasing. So, the ability to select a service with the best non-functional properties, corresponding to the user preference, is necessary for the user. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without the need for additional invocations of them. EFST extracts user preference automatically. Then, it assesses the trustworthiness of services in two dimensions of qualitative and quantitative metrics based on the experiences of past usage of services. Finally, EFST determines the overall trustworthiness of services using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST is able to predict the missing values of Quality of Service (QoS) better than other competing approaches. Also, it helps users select the most appropriate services.
    KEYWORDS

    User preference, cloud service, trustworthiness, QoS metrics, prediction


    Public Cloud Data Storage and Retrieval Security Model
    Sonia Amamou1, Zied Trifa2 and Maher Khmakhem3, 1MIRACL Research Center, Sfax, Tunisia, 2ISAE Gafsa, University of Gafsa, Tunisia and 3King Abdulaziz University, Tunisia
    ABSTRACT
    Data protection is considered a main facet of a favorable development of the public cloud infrastructure, since it largely determines the proper working of the infrastructure for both cloud users and cloud providers. Unfortunately, data protection has not yet been given the required attention. Nowadays, in order to achieve data protection, cryptographic approaches are recommended as solutions because they can grant data users some degree of control. Nevertheless, they can reduce efficiency and introduce significant overhead in any cloud system. In this work, we study storage and retrieval protection methods in the cloud. Then, we evaluate their strengths and limitations in order to propose a new approach.
    KEYWORDS

    Public Cloud, Data Protection, Cloud Security, Security Model, Data Storage, Data Retrieval


    Threat Modelling for the Virtual Machine Image in Cloud Computing
    Raid Khalid Hussein, University of Southampton, United Kingdom
    ABSTRACT
    Cloud computing is one of the most attractive technologies in the era of computing, owing to its ability to reduce the cost of data processing while increasing flexibility and scalability for computer processes. Security is one of the main concerns related to cloud computing, as it hampers organizations' adoption of this technology. Infrastructure as a Service (IaaS) is one of the main services of cloud computing, which uses virtualization to supply virtualized computing resources to its users through the internet. The Virtual Machine Image is a main component in the cloud, as it is used to run an instance. There are a number of security issues related to the virtual machine image that need to be analyzed to draw future directions for securing the virtual machine as an essential component of cloud computing. Therefore, this paper provides a threat modelling approach to identify the threats that affect the virtual machine image. In addition, threat classification is carried out for each individual threat to find out its negative effects on cloud computing. A potential attack is drawn to show how an adversary might exploit a weakness in the system to attack the Virtual Machine Image.
    KEYWORDS

    Cloud Security, Virtualization, Virtual Machine Image, Security Threats, Threat Modelling


    Secure and privacy preserving sharing of patient information in cloud-based healthcare systems
    Sanjay Sareen and Meenu Sareen, Guru Nanak Dev University, India
    ABSTRACT
    The growing need to share patients' health information between healthcare agencies, doctors, and other authorized persons in order to provide better patient care has resulted in the cloud-based healthcare system. However, sharing health information on the cloud raises major security and privacy issues that need to be addressed. This article introduces a framework that protects health information from unauthorized access and lets the patient, as data owner, decide who the authorized persons are, i.e., to whom the patient discloses her health information. The proposed framework presents a new methodology using a combination of techniques such as information granulation, information pseudonymization, and a secret sharing scheme, allowing privacy-preserving primary and secondary use of the health records. In this model, the privacy of the patient's identity is protected so that users and providers of healthcare services do not need to trust the cloud service provider with privacy-related issues. We designed and implemented different privacy-preserving algorithms on Amazon EC2 to evaluate the security and performance of our proposed system. The security analysis showed that the framework is secure and protected against common intruder scenarios.
    KEYWORDS

    Pseudonymization; secret sharing scheme; cloud computing; secure data sharing; healthcare systems
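
    The secret sharing ingredient mentioned above is commonly realized with Shamir's scheme; the following is a minimal, self-contained Python sketch of that general technique (illustrative only, not the paper's implementation).

import random

PRIME = 2**127 - 1  # prime field; shares are points on a random polynomial

def make_shares(secret, k, n):
    # Split `secret` into n shares; any k of them reconstruct it.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice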


    Security Considerations For Edge computing
    John M. Acken1 and Naresh K. Sehgal2, 1Portland State University, Portland, OR and 2Intel Corp, Santa Clara, CA
    ABSTRACT
    The present state of edge computing is an environment of different computing capabilities connecting via a wide variety of communication paths. This situation creates both great operational capability opportunities and unimaginable security problems. This paper emphasizes that the traditional approaches to security of identifying a security threat and developing the technology and policies to defend against that threat are no longer adequate. The wide variety of security levels, computational capabilities, and communication channels requires a learning, responsive, varied, and individualized approach to information security. We describe a classification of the nature of transactions with respect to security based upon relationships, history, trust status, requested actions and resulting response choices. We propose that each element in the edge computing world utilizes a localized ability to establish an adaptive learning trust model with each entity that communicates with the element.
    KEYWORDS

    Edge Computing, Security, Adaptive learning, Trust model, Threats, Cloud Computing, Information Security
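
    One common way to realize an adaptive learning trust model like the one described above is an exponentially weighted update of a per-peer trust score. The toy Python sketch below shows the idea; the update rule, rate and outcomes are assumptions for illustration, not taken from the paper.

def update_trust(trust, outcome, rate=0.2):
    # trust in [0, 1]; outcome is 1 for a good transaction, 0 for a bad one.
    return (1 - rate) * trust + rate * outcome

# Each edge element keeps one score per peer and updates it after every transaction.
history = [1, 1, 0, 1, 0, 0]      # hypothetical transaction outcomes with one peer
t = 0.5                           # neutral prior for an unknown peer
for outcome in history:
    t = update_trust(t, outcome)
print(round(t, 3))                # the response policy can threshold on this score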


    A New Prediction Framework For Improving the Energy Efficiency in Cloud
    Maryam Amiri and Leyli Mohammad-Khanli, University of Tabriz, Iran
    ABSTRACT
    Cloud computing relies on a shared pool of computing resources. It delivers services with availability and scalability obligations to users. On the other hand, green computing and energy efficiency are two main challenges of the cloud. Therefore, resources should be allocated in a way that reduces energy consumption and avoids Quality of Service (QoS) degradation. Since storage is one of the biggest energy consumers, this paper focuses on the storage system in the cloud. To improve the energy efficiency of storage, the next access number of the data blocks of disks is predicted. Based on the predicted results, data blocks can be moved to the most appropriate disks and energy consumption is reduced. The goal of this paper is to present a new method to predict the next access number of the data blocks of disks. The proposed framework is composed of four components: fuzzy classifier, Markov chain, base prediction methods and neural network. Indeed, the main predictor component is the neural network. After prediction of the next state of a data block, data can be distributed on the disks in a way that minimizes the number of disks, reducing energy consumption and avoiding Service Level Agreement (SLA) violations. The results of experiments show the high accuracy of the proposed method in comparison with similar methods.
    KEYWORDS

    Cloud computing, Prediction framework, Energy efficiency, Neural Network, Fuzzy Classifier, Access number


    Trust Modelling for Security of IoT Devices
    Naresh K. Sehgal1, Shiv Shankar2 and John M. Acken3, 1Data Centre Group, Intel Corp, Santa Clara, CA, 2Chief Data Scientist, Maphalli, Bangalore, India and 3Portland State University, Portland, OR
    ABSTRACT
    IoT (Internet of Things) represents many kinds of devices in the field, connected to data-centers via various networks, submitting data and allowing themselves to be controlled. Connected cameras, TVs, media players, access control systems, and wireless sensors are becoming pervasive. Their applications include retail solutions, home, transportation and automotive, and industrial and energy settings. This growth also represents a security threat, as several hacker attacks have been launched using these devices as agents. We explore the current environment and propose a quantitative and qualitative trust model, using a multi-dimensional exploration space, based on the hardware and software stack. This can be extended to any combination of IoT devices, and dynamically updated as the type of applications, deployment environment or any ingredients change.
    KEYWORDS

    Edge Computing, Security, Adaptive learning, Trust model, Threats, Cloud Computing, Information Security


    QOS-Driven Job Scheduling: Multi-Tier Dependency Considerations
    Husam Suleiman and Otman Basir, Department of Electrical and Computer Engineering, University of Waterloo, Canada
    ABSTRACT
    For a cloud service provider, delivering optimal system performance while fulfilling Quality of Service (QoS) obligations is critical for maintaining a viably profitable business. This goal is often hard to attain given the irregular nature of cloud computing jobs. These jobs expect high QoS in an on-demand fashion, that is, on random arrival. To optimize the response to such client demands, cloud service providers organize the cloud computing environment as a multi-tier architecture. Each tier executes its designated tasks and passes the job to the next tier, in a fashion similar, but not identical, to traditional job-shop environments. An optimization process must take place to schedule the appropriate tasks of the job on the resources of the tier, so as to meet the QoS expectations of the job. Existing approaches employ scheduling strategies that consider performance optimization at the individual resource level and produce optimal single-tier-driven schedules. Due to the sequential nature of the multi-tier environment, the impact of such schedules on the performance of other resources and tiers tends to be ignored, resulting in less than optimal performance when measured at the multi-tier level.
    In this paper, we propose a multi-tier-oriented job scheduling and allocation technique. The scheduling and allocation process is formulated as a problem of assigning jobs to the resource queues of the cloud computing environment, where each resource of the environment employs a queue to hold the jobs assigned to it. The scheduling problem is NP-hard; as such, a biologically inspired genetic algorithm is proposed. The computing resources across all tiers of the environment are virtualized as one resource by means of a single-queue virtualization. A chromosome that mimics the sequencing and allocation of the tasks in the proposed virtual queue is proposed, and system performance is optimized at the chromosome level. Chromosome manipulation rules are enforced to ensure task dependencies are met. The paper reports experimental results to demonstrate the performance of the proposed technique under various conditions and in comparison with other commonly used techniques.
    KEYWORDS

    Cloud Computing, Task Scheduling and Allocation, QoS Optimization, Load Balancing, Genetic Algorithms


    Virtual Enterprise Architecture Supply Chain (VEASC) Model On Cloud Computing: A Simulation-Based Study Through OPNET Modelling
    Tlamelo Phetlhu, Department of Commerce and Law, University of Zululand, KwaZulu Natal, South Africa
    ABSTRACT
    The virtual enterprise architecture supply chain (VEASC) model has been studied in this research employing OPNET modelling and simulations. VEASC requires a synchronous framework of integrated applications and databases, coordination, collaboration, and communication to ensure high accuracy and responsiveness in a supply chain (SC). The traditional models of electronic data interchange (EDI) and out-of-application methods for messaging and collaboration are not suitable for achieving the full benefits of VEASC, because multiple human interventions may be required. In this research, a cloud-based SC application and its distributed databases, contributed by multiple supplier and buyer organisations, are modelled and simulated in OPNET. The application modelled on the cloud is based on commercial software called INTEND. The simulation results revealed a continuous flow through all the phases of the SC application, as the reports reflected continuous interactions between the agents involved and the cloud distributed databases. This model is thus a good enabler of the VEASC model.
    KEYWORDS

    Supply chain, enterprise architecture, integration, collaboration, cloud computing, OPNET, simulations


    SECURITY ISSUES IN CLOUD-BASED BUSINESSES
    Mohamad Ibrahim AL Ladan, Rafik Hariri University, LEBANON
    ABSTRACT
    A cloud-based business is a business running on and relying on the cloud computing IT paradigm. Cloud computing is an emerging technology paradigm that transfers current technological and computing concepts into utility-like solutions similar to electricity and communication systems. It provides full scalability, reliability, computing resource configurability and outsourcing, resource sharing, external data warehousing, and high-performance, relatively low-cost feasible solutions and services compared to dedicated infrastructures. Cloud-based businesses store, access, use, and manage their data and software applications over the internet on a set of servers in the cloud, without the need to have them stored/installed on their local devices. Cloud technology is used daily by many businesses and people around the world, from using web-based email services to executing heavy, complex business transactions. Like any other emerging technology, cloud computing comes with its own pros and cons. It is very useful in business development as it brings results in a timely manner; however, it comes with increasing security and privacy concerns and issues. In this paper we investigate, analyse, classify, and discuss the new security concerns and issues introduced by cloud computing. In addition, we present some security requirements that address and may alleviate these concerns and issues.
    KEYWORDS

    Cloud-based business security issues and concerns; cloud computing security issues and concerns; cloud computing security requirements


    A MapReduce based Algorithm for Data Migration in a Private Cloud Environment
    Anurag Kumar Pandey, Ruppa K. Thulasiram and A. Thavaneswaran, University of Manitoba, Winnipeg, Canada
    ABSTRACT
    The use of cloud computing has grown quickly, and it is now a technology that serves the compute, storage and data management needs of individuals, businesses, governments and other organizations. Large data centers are created to serve a large number of clients' workloads from various walks of life. When a resource in a data center reaches its end-of-life, instead of investing in upgrading it, it is possibly time to decommission such a resource and migrate workloads to other resources in the data center. Data migration between different cloud servers of a given private cloud is risky due to the possibility of data loss. We have designed a MapReduce based algorithm and have introduced a few metrics to test and evaluate our proposed framework. We show that our algorithm for data migration works efficiently for text, image, audio and video files with minimum data loss, and scales well for large data as well.
    KEYWORDS

    Cloud Computing, Private Cloud, Data Migration, MapReduce, Data Loss, Cost
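
    A toy Python sketch of the map/verify idea behind checksum-guarded migration follows, with local files standing in for cloud servers; the chunking and hashing choices are illustrative assumptions, not the authors' algorithm.

import hashlib
import shutil

def map_chunks(path, chunk_size=1 << 20):
    # Map phase: split a file into chunks and emit (index, checksum) pairs.
    sums = []
    with open(path, "rb") as f:
        i = 0
        while chunk := f.read(chunk_size):
            sums.append((i, hashlib.sha256(chunk).hexdigest()))
            i += 1
    return sums

def migrate(src, dst):
    before = map_chunks(src)
    shutil.copyfile(src, dst)     # stand-in for the actual inter-server transfer
    after = map_chunks(dst)
    # Reduce phase: any mismatched checksum marks a chunk to re-transmit.
    bad = [i for (i, a), (_, b) in zip(before, after) if a != b]
    return bad                    # empty list means migration verified, no data loss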


    INTEGRATING CLOUD COMPUTING TO SOLVE ERP COST CHALLENGE
    Amal Alhosban and Anvitha Akurathi, Department of Computer Science, Engineering and Physics University of Michigan-Flint, MI, USA
    ABSTRACT
    Enterprise Resource Planning (ERP) is a popular business management tool used by almost all companies these days to organize their business. In spite of the challenges ERP faces before, during and after its implementation in the enterprise, it fetches greater profits for the organization. This paper deals with the challenges faced by ERP, with a complete literature overview of the challenges identified by earlier authors. After a brief review of these factors, a topic essential to enterprises, namely costs, is discussed. The costs that are incurred in the project, including unknown or hidden costs, are dealt with. A solution is proposed to solve this cost problem of ERP and to improve companies' profit margins: Cloud ERP. The latter part deals with the benefits of Cloud ERP, in general and with respect to costs, along with the concerns of Cloud ERP, the major issue among all the concerns, and a few proposed solutions to this problem in Cloud ERP.
    KEYWORDS

    ERP, Cost, Cloud ERP, Security


    Enabling Edge Computing Using Container Orchestration and Software Defined Wide Area Networks
    Felipe Rodriguez Yaguache1 and Kimmo Ahola2, 1School of Electrical Engineering, Aalto University, Espoo, Finland and 25G Networks & Beyond, Technical Research Centre of Finland (VTT), Espoo, Finland
    ABSTRACT
    With SD-WAN being increasingly adopted by corporations, and Kubernetes becoming the de-facto container orchestration tool, the opportunities for deploying edge-computing applications running over an SD-WAN scenario are vast. In this document, an in-house service discovery solution that works alongside Kubernetes' master node to allow improved traffic handling and a better user experience is developed. First, a proof-of-concept SD-WAN topology was implemented alongside a Kubernetes cluster and the in-house service discovery solution. Next, the implementation's performance is tested based on the times required to update the discovery solution according to service updates. Finally, some conclusions and modifications are pointed out based on the results, while also discussing possible enhancements.
    KEYWORDS

    SD-WAN, Edge computing, Virtualization, Kubernetes, Services
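
    A service discovery component of the general kind described above can be sketched with the official Kubernetes Python client by watching service events; a minimal illustration follows (the registry structure is an assumption, not the paper's in-house design).

from kubernetes import client, config, watch

def watch_services():
    # Stream service add/update/delete events from the cluster's API server.
    config.load_kube_config()          # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    registry = {}                      # name -> cluster IP, our discovery table
    for event in watch.Watch().stream(v1.list_service_for_all_namespaces):
        svc = event["object"]
        key = f"{svc.metadata.namespace}/{svc.metadata.name}"
        if event["type"] == "DELETED":
            registry.pop(key, None)
        else:                          # ADDED or MODIFIED
            registry[key] = svc.spec.cluster_ip
        print(event["type"], key, registry.get(key))

watch_services()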


    Context-Aware Trust-Based Access Control For Ubiquitous Systems
    Malika Yaici, Faiza Ainennas and Nassima Zidi, Computer Department, University of Bejaia, Bejaia, Algeria
    ABSTRACT
    Ubiquitous computing and context-aware applications are currently experiencing very important development. This has led organizations to open up more of their information systems, making them available anywhere, at any time, and integrating the dimension of mobile users. This cannot be done without thoughtfully taking access security into account: a pervasive information system must henceforth be able to take contextual features into account to ensure robust access control. In this paper, access control and a few existing mechanisms are reviewed. It is intended to show the importance of taking context into account during a request for access. In this regard, our proposal incorporates the concept of trust to establish a trust relationship according to three contextual constraints (location, social situation and time) in order to decide whether to grant or deny a user's access request to a service.
    KEYWORDS

    Pervasive systems, Access Control, RBAC, Context-awareness, Trust management


    An Ontology Based Approach to Improve Process Mining Results in University Information Systems
    Maryem Dellai and Yemna Sayeb, Research Laboratory, ISAMM, Manouba, Tunisia
    ABSTRACT
    Process mining algorithms use event logs to extract process-related information, to discover and analyze conformance, or to enhance processes. Event logs can be used to analyze and visualize processes with better insight and improved formal access to the data. Most process mining (PM) applications are based on event logs with keyword-based activity and resource descriptions. In recent years, a lot of effort has been dedicated to exploring logic-based ontology formalisms. In this research work, we use ontologies that are intended to define the semantics of recorded events. The highest quality of event logs requires the existence of ontologies to which events and attributes point. Many human-designed processes are based on explicit workflow or lifecycle models which can be described using taxonomies or more complicated ontologies. Ontologies have been successfully applied to represent knowledge in many domains. In this paper, we introduce an approach for enriching event logs for process mining with associated ontology structures. Our proposal is to provide features that help integrate event logs from event sources in order to extract data and put it into a suitable, semantically enriched format so that the data can be exploited with process mining tools (ProM).
    KEYWORDS

    process mining, event logs, ontologies, process model, Petri net.


    Construction of an Oral Cancer Auto-Classify System Based on Machine Learning for Artificial Intelligence
    Meng-Jia Lian1, Chih-Ling Huang2 and Tzer-Min Lee1,3
    1School of Dentistry, Kaohsiung Medical University, Kaohsiung, Taiwan
    2Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung, Taiwan
    3Institute of Oral Medicine, National Cheng Kung University Medical College, Tainan, Taiwan
    ABSTRACT
    Oral cancer is one of the most prevalent tumors of the head and neck region. An earlier diagnosis can help dentists devise a better therapy plan and give patients better treatment, so reliable techniques for detecting oral cancer cells are urgently required. This study proposes an optical and automated method using reflection images obtained with a scanned laser pico-projection system, with the Gray-Level Co-occurrence Matrix for sampling. Moreover, an artificial intelligence technique, the Support Vector Machine, was used to classify samples. Normal oral keratinocytes and dysplastic oral keratinocytes, simulating the evolution of cancer, were classified. The accuracy in distinguishing the two cell types reached 85.22%. Compared to existing diagnosis methods, the proposed method possesses many advantages, including lower cost, a larger sample size, and instant, non-invasive, and more reliable diagnostic performance. As a result, it provides a highly promising solution for the early diagnosis of oral squamous carcinoma.
    KEYWORDS

    Oral Cancer Cell, Normal Oral Keratinocyte (NOK), Dysplastic oral keratinocyte (DOK), Gray-Level Cooccurrence Matrix (GLCM), Scanned Laser Pico-Projection (SLPP), Support Vector Machine (SVM), Machine-Learning
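
    The GLCM-plus-SVM pipeline maps naturally onto scikit-image and scikit-learn; below is a minimal Python sketch with random placeholder images (the feature choices and parameters are assumptions, not the paper's exact setup).

import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image
from sklearn.svm import SVC

def glcm_features(img):
    # Texture features from a grey-level co-occurrence matrix.
    g = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(g, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# X: grey-level cell images, y: 0 = normal (NOK), 1 = dysplastic (DOK); placeholder data.
X = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
y = [0] * 10 + [1] * 10
clf = SVC(kernel="rbf").fit([glcm_features(im) for im in X], y)
print(clf.predict([glcm_features(X[0])]))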


    Automatic Extraction of Feature Lines on 3D Surface
    Zhihong Mao, Division of Intelligent Manufacturing, Wuyi University, Jiangmen, China
    ABSTRACT
    Many applications in mesh processing require the detection of feature lines. Feature lines convey the inherent features of the shape. Existing techniques to find feature lines in discrete surfaces rely on user-specified thresholds and are inaccurate and time-consuming. We use an automatic approximation technique to estimate the optimal threshold for detecting feature lines. Some examples are presented to show that our method is effective and leads to improved feature line visualization.
    KEYWORDS

    Feature Lines, Extraction, Meshes.


    HMM-Based Dari Named Entity Recognition for Information Extraction
    Ghezal Ahmad Jan Zia
    Department of Models and Theory of Distributed Systems, TU Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
    ABSTRACT
    Named Entity Recognition (NER) is the fundamental subtask of information extraction systems that labels elements into categories such as persons, organizations or locations. The task of NER is to detect and classify words that are parts of sentences. This paper describes a statistical approach to modeling NER in the Dari language. Dari and Pashto are low-resource languages, spoken as official languages in Afghanistan. Named entity detection approaches in Dari differ from those in other languages, since the Dari language has no capitalization for identifying named entities. We seek to bridge the gap between Dari linguistic structure and a supervised learning model that predicts sequences of words paired with sequences of tags as outputs. A Dari corpus was developed from a collection of news, reports and articles based on the original orthographic structure of the Dari language. The experimental results show a named entity recognition accuracy of 95%.
    KEYWORDS

    Natural Language Processing (NLP), Hidden Markov Model (HMM), Named Entity Recognition (NER), Part-of-Speech (POS) Tagging
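
    At the core of HMM-based NER is Viterbi decoding of the most likely tag sequence; below is a minimal log-space Python sketch with hypothetical two-tag probabilities (not the paper's trained Dari model).

import numpy as np

def viterbi(obs, states, start_p, trans_p, emit_p):
    # Most likely tag sequence for an observed word sequence (log-space Viterbi).
    V = [{s: start_p[s] + emit_p[s].get(obs[0], -20.0) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best = max(states, key=lambda p: V[t - 1][p] + trans_p[p][s])
            V[t][s] = V[t - 1][best] + trans_p[best][s] + emit_p[s].get(obs[t], -20.0)
            back[t][s] = best
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

# Hypothetical log-probabilities for a two-tag model (PER vs O):
states = ["PER", "O"]
start = {"PER": np.log(0.3), "O": np.log(0.7)}
trans = {"PER": {"PER": np.log(0.4), "O": np.log(0.6)},
         "O":   {"PER": np.log(0.2), "O": np.log(0.8)}}
emit = {"PER": {"ahmad": np.log(0.5)},
        "O": {"went": np.log(0.3), "home": np.log(0.3)}}
print(viterbi(["ahmad", "went", "home"], states, start, trans, emit))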


    Designing Dynamic Protocol for Real-Time IIoT-based Applications by Efficient Management of System Resources
    Farzad Kiani, Sajjad Nematzadehmiandoab, Amir Seyyedabbasi
    Computer Engineering Dept., Engineering and Natural Sciences Faculty at Istanbul Sabahattin Zaim University, Kucukcekmece, 34303, Istanbul, Turkey
    ABSTRACT
    Due to their increased applicability, wireless sensor networks have captured the attention of researchers from various fields. Regardless, these networks still suffer from various challenges and limitations. These problems are even more pronounced in some areas of the field, such as real-time IoT based applications. Here, a dynamic protocol that efficiently utilizes the available resources is proposed. The protocol employs five developed algorithms that aid the data transmission, neighbor finding and optimal path finding processes. The protocol can be utilized in, but is not limited to, real-time large data streaming applications. This paper defines a structure that enables the sensor devices to communicate with each other over their local network or the internet as required, in order to preserve the available resources. Both theoretical and experimental analyses of the entire protocol in general and of the individual algorithms are also performed.
    KEYWORDS

    Big data, wireless sensor networks, real-time systems, energy efficiency, routing protocol, IoT.


      Interactive Mesh Cutout Using Graph Cuts
      Zhihong Mao, Division of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, China
      ABSTRACT
      Mesh segmentation is a foundational operation for many computer graphics applications. Although various automatic segmentation schemes have been proposed, precisely obtaining a meaningful part of a mesh remains a challenging issue. In this paper, we introduce an interactive system to efficiently extract meaningful objects from a triangular mesh. The algorithm proposed in this paper extends min-cut based 2D image segmentation techniques to the domain of 3D meshes. We also provide a screen-space user interface that allows the user to indicate the meaningful object easily. In our system, quadric-based surface simplification is adopted for a large mesh: we use min-cut on the simplified mesh, then graph cuts are used to refine the previous cuts on the original mesh. The results show that our proposed method is relatively simple and effective as a powerful tool for mesh cutout.
      KEYWORDS

      Mesh Segmentation, Mesh Cutout, Graph cuts.
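
      The min-cut step on the dual graph of a mesh can be illustrated with networkx; the toy Python sketch below uses triangle faces as nodes and seed faces in place of user strokes (the weights are invented for illustration, not the paper's capacity function).

import networkx as nx

# Toy dual graph of a mesh: nodes are triangle faces; capacities drop at concave
# creases so the cheapest cut follows a natural part boundary.
G = nx.DiGraph()
for u, v, w in [("f1", "f2", 0.9), ("f2", "f3", 0.1), ("f3", "f4", 0.9),
                ("f1", "f3", 0.2), ("f2", "f4", 0.15)]:
    G.add_edge(u, v, capacity=w)   # undirected surface, so add both arc directions
    G.add_edge(v, u, capacity=w)

# The user's strokes mark a foreground seed face and a background seed face.
cut_value, (fg, bg) = nx.minimum_cut(G, "f1", "f4")
print(cut_value, sorted(fg), sorted(bg))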


      Tough Random Symmetric 3-SAT Generator
      Robert Amador1, Chang-Yu Hsieh2 and Chen-Fu Chiang3
      1,3Department of Computer Science, State University of New York Polytechnic Institute, Utica, NY 13502, USA and 2Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
      ABSTRACT
      We designed and implemented an efficient tough random symmetric 3-SAT generator. We quantify the hardness in terms of the CPU time and the numbers of restarts, decisions, propagations, conflicts and conflicted literals that occur when a solver tries to solve 3-SAT instances. In this experiment, the clause-to-variable ratio was chosen to be around the conventional critical phase transition number 4.24. The experiment shows that instances generated by our generator are significantly harder than instances generated by the Tough K-SAT generator. The difference in hardness between the two SAT instance generators grows exponentially as the number of boolean variables increases.
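
      For context, a plain random 3-SAT generator at the critical ratio looks like the Python sketch below; the paper's "symmetric" balancing is left to its own definition, so this is not the authors' generator, only the baseline idea.

import random

def random_3sat(n_vars, ratio=4.24):
    # Random 3-SAT as DIMACS-style clause tuples at the critical clause/variable
    # ratio; each literal's sign here is a fair coin flip (illustrative only).
    n_clauses = round(ratio * n_vars)
    clauses = []
    while len(clauses) < n_clauses:
        vs = random.sample(range(1, n_vars + 1), 3)   # three distinct variables
        clauses.append(tuple(v if random.random() < 0.5 else -v for v in vs))
    return clauses

for clause in random_3sat(20)[:5]:
    print(" ".join(map(str, clause)), "0")            # DIMACS clause lines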

      Data Analysis of Wireless Networks Using Classification Techniques
      Daniel Rosa Canêdo1,2 and Alexandre Ricardo Soares Romariz1
      1Department of Electrical Engineering, University of Brasília, Brasília, Brazil and 2Federal Institute of Goiás, Luziânia, Brazil
      ABSTRACT
      In the last decade, there has been great technological advance in the infrastructure of mobile technologies. An increase in the use of wireless local area networks and of satellite services is also noticeable. The high utilization rate of mobile devices for various purposes makes clear the need to monitor wireless networks to ensure the integrity and confidentiality of the information transmitted. Therefore, it is necessary to quickly and efficiently identify the normal and abnormal traffic of such networks, so that administrators can take action. This work aims to analyze classification techniques applied to data from wireless networks, using classes of anomalies pre-established according to defined criteria of the MAC layer. For data analysis, the WEKA (Waikato Environment for Knowledge Analysis) data mining software is used. The classification algorithms present a viable success rate, indicating their suitability for use in intrusion detection systems for wireless networks.
      KEYWORDS

      Wireless Networks, Classification Techniques, Weka.


      A Survey Of State-Of-The-Art GAN-Based Approaches To Image Synthesis
      Shirin Nasr Esfahani and Shahram Latifi, University of Nevada, Las Vegas, USA.
      ABSTRACT
      In the past few years, Generative Adversarial Networks (GANs) have received immense attention from researchers in a variety of application domains. This new field of deep learning has been growing rapidly and has provided a way to learn deep representations without extensive use of annotated training data. Their achievements may be used in a variety of applications, including speech synthesis, image and video generation, semantic image editing, and style transfer. Image synthesis is an important component of expert systems and has attracted much attention since the introduction of GANs. However, GANs are known to be difficult to train, especially when they try to generate high-resolution images. This paper gives a thorough overview of the state-of-the-art GAN-based approaches in four applicable areas of image generation: text-to-image synthesis, image-to-image translation, face aging, and 3D image synthesis. Experimental results show state-of-the-art performance using GANs compared to traditional approaches in the fields of image processing and machine vision.
      KEYWORDS

      Conditional generative adversarial networks (cGANs), image synthesis, image-to-image translation, text-to-image synthesis, 3D GANs.


      A Call Graph Reduction based Novel Storage Allocation for Smart City Applications
      Prabhdeep Singh, Rajvir Kaur, Diljot Singh and Vivek Gupta, Punjabi University, India.
      ABSTRACT
      Today's world is becoming smarter day by day, and smart cities play an important role in making it so. Thousands of smart city applications are being developed every day, and every second a huge amount of data is generated. The data needs to be managed and stored properly so that information can be extracted using various emerging technologies. The main aim of this paper is to propose a storage scheme for data generated by smart city applications. A matrix is used which stores the information of each adjacency node of each level as well as the weight and frequency of the call graph. It has been experimentally shown that the applied algorithm reduces the size of the call graph without changing its basic structure and without any loss of information. Once the graph is generated from the source code, it is stored in the matrix and reduced appropriately using the proposed algorithm. The proposed algorithm is also compared to other call graph reduction techniques, and it has been experimentally evaluated that it significantly reduces the graph and stores the smart city application data efficiently.

      Comparing String Similarity Measures In The Task Of Name Matching
      Aleksandra Zaba, University of Utah, USA.
      ABSTRACT
      This pilot study reports recall, precision, and f-measures for three groups of string similarity algorithms contained in the 'stringdist' package of R: the edit-based Levenshtein, full Damerau-Levenshtein, Hamming, and longest common substring measures; the q-gram based Jaccard, q-gram, and cosine measures; and the heuristic Jaro and Jaro-Winkler measures. The algorithms specify values for the similarity between a base word, a female first name, and three of its variants: that same name, and two of the following: its foreign version (categorized by us as 'same'), its male version ('different'), and a different, also female, version of the base name in American English ('different'). We report f-measures, and these are interpreted in the context of the given algorithm. For our data so far, a relatively low threshold (from 'match' to 'not match'; assigned by us to an algorithm's value for a given similarity) provides the highest weighted average of recall and precision.
      KEYWORDS

      Artificial Intelligence, Natural Language Processing, String Similarity Algorithms, R, F-Measure.
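
      The paper works in R's 'stringdist'; the same threshold-based matching evaluation can be sketched in Python with a hand-rolled Levenshtein similarity, as below (toy data, illustrative only).

def levenshtein(a, b):
    # Classic edit distance via dynamic programming over two rows.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def f_measure(pairs, threshold):
    # pairs: (name, variant, is_same); predict a match when similarity >= threshold.
    tp = sum(1 for a, b, s in pairs if similarity(a, b) >= threshold and s)
    fp = sum(1 for a, b, s in pairs if similarity(a, b) >= threshold and not s)
    fn = sum(1 for a, b, s in pairs if similarity(a, b) < threshold and s)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

data = [("anna", "ana", True), ("anna", "anton", False), ("maria", "mary", True)]
print(f_measure(data, threshold=0.6))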


    Performance Comparison Of Web-Based Book Recommender Systems
    Swathi S Bhat, Pranav P, Shashank K V and Arpitha Raghunandan, National Institute of Technology, India.
    ABSTRACT
    Recommendation systems are being widely used for personalization on the web today. E-commerce giants rely heavily on their recommendation systems to improve their business. As a result, the quality of recommendations can have a significant impact on their sales. Hence, proper evaluation of such recommender systems is important. Traditional evaluation metrics are limited to error-based and accuracy metrics and do not take into consideration factors like diversity, novelty, informedness, markedness etc. We aim to perform a comprehensive performance comparison of two web-based book recommendation systems using lesser-known but equally important metrics like diversity, informedness and markedness.
    KEYWORDS

    Recommendation systems, diversity, metrics, informedness, markedness, precision, recall, ROC, performance testing
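
    Informedness and markedness are simple functions of the confusion matrix (Powers' definitions: informedness = recall + specificity - 1, markedness = precision + NPV - 1); a small Python sketch with made-up counts follows.

def confusion_metrics(tp, fp, fn, tn):
    # Binary recommendation decision: recommended vs. not, relevant vs. not.
    recall = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)       # true negative rate
    precision = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    return {
        "precision": precision,
        "recall": recall,
        "informedness": recall + specificity - 1,  # gain over chance-level recall
        "markedness": precision + npv - 1,         # gain over chance-level precision
    }

print(confusion_metrics(tp=40, fp=10, fn=20, tn=130))   # hypothetical counts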


Obstacle Avoidance Robot
Faisal Imran and Dr. Yin Yunfei, Chongqing University, China.
ABSTRACT
Obstacle avoidance is one of the most important aspects of mobile robots; without it, a robot's movement would be very restricted and fragile. A robot is a machine that can perform tasks automatically or under external direction. Robotics is a combination of computational intelligence and physical machines (motors), where computational intelligence means programmatic instructions. The project proposes a robotic vehicle with integrated intelligence that steers away from obstacles entering its path. The robotic vehicle is built around an ATmega8-series microcontroller. Ultrasonic sensors are used to detect any obstacle ahead and send signals to the microcontroller. Depending on the input signal received, the microcontroller redirects the robot to move in an alternate direction by driving the connected motors through the motor controller. The evaluation of the system's performance shows an accuracy of 85% and a failure probability of 0.15. We made a robotic vehicle that moves forward, backward, to the left and to the right when the corresponding input is given. The objective of our project is to create an automated robot that intelligently detects obstacles in its path and navigates according to the actions we configure.
KEYWORDS

obstacle avoidance, ultrasonic sensor, arduino microcontroller, autonomous robot, arduino software.


    Performance of Peano-Koch Hybrid Fractal Antenna for 2.4 and 5.5 GHz Applications
    Er. Inkwinder Singh Bangi and Dr. Jagtar Singh Sivia, Punjabi University, India.
    ABSTRACT
    With modernization, the demand for wireless devices has tremendously increased. The antenna is a part-and-parcel component of every wireless electronic gadget. Thanks to hybrid fractal technology, a single antenna can be used for various applications. In this article, a hybrid fractal antenna is designed using the Peano and Koch curves. The performance of the hybrid fractal antenna is scrutinized and various antenna parameters are evaluated to judge its behavior. The hybrid fractal antenna's dimensions are 34x42 mm2 and the proposed antenna operates at GHz frequencies. The small-size antenna has a VSWR value of less than 2 at every resonant frequency. The proposed antenna is light in weight, because it is designed on FR4 epoxy material, and cheap in price. The current distribution and radiation pattern also demonstrate the omnidirectional radiation of electromagnetic waves. The maximum gain is 18 dB at 2.43 GHz in the unlicensed band for Bluetooth applications. The proposed antenna also operates at 5.5 GHz for Wi-Fi and WLAN applications and at 1.90-1.98 GHz for 3G cellular communication.
    KEYWORDS

    Peano, Koch, GHz, VSWR, FR4.


    Cloud Computing: Issues And Risks Of Embracing The Cloud In A Business Environment
    Shafat Khan, Himalayan University, India.
    ABSTRACT
    Cloud computing is a swiftly advancing paradigm that is drastically changing the way people utilize their PCs. Over the last couple of years, cloud computing has developed from a promising business concept to one of the fastest growing segments of the IT industry. Despite the boom of the cloud and its numerous advantages, such as financial benefit, a rapidly elastic resource pool, and on-demand service, enterprise clients are still hesitant to deploy their business in the cloud, and the paradigm also creates difficulties for both clients and providers. There are issues, such as unauthorized access, loss of privacy, data replication, and regulatory violations, that require adequate attention. An absence of fitting answers to such difficulties may cause risks, which may outweigh the expected benefits of using the paradigm. To address the difficulties and associated risks, a systematic risk management practice is vital that helps clients analyze both the benefits and the risks related to cloud-based systems. The aim of this paper is to provide a better comprehension of the design challenges of cloud computing and to identify important research directions in this regard, as this is an expanding area.
    KEYWORDS

    Cloud computing; Data center; Risks; Challenges; Security; Business.


    Prediction Model of SCR Outlet NOx Based on LSTM Algorithm
    Jiyu Chen, Feng Hong, Mingming Gao, Taihua Chang and Liying Xu, North China Electric Power University, China.
    ABSTRACT
    Pollutant emissions are strictly controlled in modern power plants, and nitrogen oxides (NOx) are the main contaminants in the exhaust gas. The Selective Catalytic Reduction (SCR) process is commonly used for denitration. To achieve effective control of the SCR outlet NOx concentration, an accurate outlet NOx concentration model is necessary. A model using historical data is proposed, applying the long short-term memory (LSTM) algorithm, which can describe relevance in time series. The accuracy of the proposed data-driven model is verified: the root mean square error (RMSE) and mean absolute percentage error (MAPE) for the training set are 0.706 mg/m3 and 1.99%, respectively, while for the test set they are 1.44 mg/m3 and 2.90%, respectively. The verification reveals that the accuracy of the data-driven model is acceptable for control system design.
    KEYWORDS

    LSTM, SCR, desulfurization and denitration, NOx content at outlet
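
    A minimal Keras sketch of an LSTM one-step-ahead predictor of the kind described follows, with a synthetic series standing in for measured NOx data (the layer sizes, lag and training settings are assumptions, not the paper's configuration).

import numpy as np
from tensorflow import keras

def windows(series, lag):
    # Turn a univariate series into (lag-step history -> next value) samples.
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X[..., None], y             # LSTM expects (samples, timesteps, features)

series = np.sin(np.linspace(0, 60, 2000))   # placeholder for measured NOx history
X, y = windows(series, lag=30)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(30, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print(model.predict(X[:1], verbose=0))      # one-step-ahead prediction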


IoT-Based Approach to Monitor Parking Space in Cities
Fatin Farhan Haque1, Weijia Zhou1, Jun-Shuo Ng2, Frank Walsh2, Kumar Yelamarthi1, Ahmed Abdelgawad1, 1Central Michigan University, USA, 2Waterford Institute of Technology, Waterford, Ireland.
ABSTRACT
The Internet of Things is the next big thing, as almost everything developed now makes extensive use of data, which is then used to derive daily statistics and the usage of every individual. The work mainly consists of constructing a display showing the parking space, with a camera module set up and a PIR (Passive Infrared) sensor at the entrance to detect the entry of a car or any vehicle eligible to park at the lot. The vehicle is scanned for its registration number in order to check whether it is registered to park or not; this also acts as the security of the parking lot. Moreover, a viable sensor is placed at each parking slot, through which the vacancy of each slot is shown, to determine the exact spot available to the user. To realize the project, we use a Raspberry Pi 3 with a camera module mounted on it; using TensorFlow and Node-RED we identify the car and the license number, and an infrared sensor detects parking availability, which is displayed on the screen.
KEYWORDS

IoT, Node-RED, TensorFlow, smart parking.


Reconstruction With High Resolution SAR Tomography via Compressive Sensing
Ishak Daoud1, Assia Kourgli2 and Aichouche Belhadj Aissa2
1Telecommunications and Information Processing, Faculty of Electronics and Computer Science, Laboratory of Image Processing and Radiation, University of Science and Technology Houari Boumediene, Algiers, Algeria and 2Laboratory of Image Processing and Radiation, University of Science and Technology Houari Boumediene, Algiers, Algeria
ABSTRACT
SAR tomography is an approach that uses multi-pass SAR images to decompose the target according to its backscatter mechanisms; this decomposition helps to generate the reflectivity profile along the elevation axis. However, the common Rayleigh resolution related to the Nyquist condition can cause quality problems in elevation, due to the low number of acquisitions, the non-regular distribution of the baseline and its small aperture in the orthogonal baseline. The work presented in this paper concerns the reconstruction of the reflectivity profile of TerraSAR-X radar target images by exploiting the concept of Compressive Sensing (CS), assuming that the target generally has a sparse representation along the elevation direction. We also present a reconstruction of some simulated profiles based on the radar characteristics of TerraSAR-X, using algorithms such as Basis Pursuit (BP) and Basis Pursuit Denoising (BPDN), to faithfully simulate a real reconstruction when the measurements are noisy. Based on the results obtained with the convex reconstruction algorithms implemented in MATLAB, we show how the number of measurements necessary for reconstruction can be reduced and how the number of reconstructed samples can be increased, which can bring better resolution in elevation for a noisy measurement, based on a lemma that binds the sparsity, the total reconstructed samples and the number of measurements.
KEYWORDS

SAR tomography, compressive sensing, sparsity, L1 & L0 norm minimization, Restricted Isometry Property.
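
The BP/BPDN reconstruction can be sketched with cvxpy; the Python toy below uses a real-valued stand-in for the (complex) SAR steering matrix, and the dimensions and noise bound are illustrative assumptions.

import numpy as np
import cvxpy as cp

np.random.seed(0)
n, m, k = 200, 60, 5                  # elevation bins, acquisitions, scatterers
A = np.random.randn(m, n)             # stand-in for the steering/measurement matrix
x_true = np.zeros(n)
x_true[np.random.choice(n, k, replace=False)] = np.random.randn(k)
y = A @ x_true + 0.01 * np.random.randn(m)

x = cp.Variable(n)
# BPDN: minimize ||x||_1 subject to a noise-level residual bound.
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm2(A @ x - y) <= 0.05])
prob.solve()
print(np.flatnonzero(np.abs(x.value) > 1e-3))   # recovered scatterer positions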


Particle Visualization System Based on the Scattering of Light Produced by a Slide Beam Laser in a Clean Room Using Image Processing
O. Juina1, S. C. Hu2 and T. Lin3, 1Department of Mechanical and Automation Engineering, National Taipei University of Technology, Taipei, 10608 Taiwan, R.O.C. and 2,3Department of Energy and Refrigerating Air-Conditioning Engineering, National Taipei University of Technology, Taipei, 10608 Taiwan, R.O.C.
ABSTRACT
In the field of clean room systems, the need for high standards of cleaning and environmental control has driven the creation of new equipment that can solve the various problems of monitoring particle filtration. The system proposed below has been developed based on new technologies: the evolution of camera sensors, the use of a beam laser to visualize particles, and the link between programming algorithms and free platforms. The first system used consisted of a Canon 650D camera with a 17-55 mm lens and an LD-pumped all-solid-state green laser. Tests were performed inside a controlled environment from which external light was insulated, with a transparent FOUP (Front Opening Unified Pod) into which a sample of white marble dust was introduced to observe its dispersion among the particles. Photos were taken at different angles of incidence, such as against the laser and at 60 degrees with reference to the laser's line of action. Furthermore, other tests were carried out in an external environment, where photographs of particles from the human body were taken, simulating the normal movements inside a clean room. To compare results, we used a second system composed of a high-resolution CMOS global-shutter camera (LT225) and the same LD-pumped all-solid-state green laser, using the same parameters as the first system. The cases with the transparent FOUP and large particles from the body were repeated. The third stage is image processing using OpenCV libraries, in this case EmguCV. The fundamental principle of the image processing is the reading of each pixel and its intensity; for black-and-white images, each pixel receives a value from 0 to 255, with 0 being black and 255 white. The program algorithm responds to these values and separates the high-intensity values from the low-intensity ones. In this case, the green color becomes an important value which, by means of mathematical filters, generates a clearer image of where the particles are. The main feature of the images taken by the Canon 650D camera is its resolution, which at short distances allows displaying small particles on the order of 12 um, but its limitation is its capture speed. On the other hand, the processed results of the photographs taken by the CMOS camera showed greater accuracy in frames per second, but a large amount of storage memory is required due to the capture speed.
KEYWORDS

Particle Visualization System, FOUP (Front Opening Unified Pod), CMOS Sensor, OpenCV.
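
The thresholding logic described above (pixel intensities 0-255, separating high values from low) can be sketched with OpenCV in Python, even though the paper uses EmguCV; the file name and filter choices below are hypothetical.

import cv2

img = cv2.imread("frame.png")                    # hypothetical capture from the camera
g = img[:, :, 1]                                 # green channel: laser-lit particles
# Pixels range 0-255; Otsu picks the split between background and bright particles.
_, mask = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.medianBlur(mask, 3)                   # drop single-pixel sensor noise
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print(f"{n - 1} particles detected")             # label 0 is the background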


Blind Image Quality Assessment Using Singular Value Decomposition Based Dominant Eigenvectors For Feature Selection
Besma Sadou, Atidel Lahoulou and Toufik Bouden, University of Jijel, Algeria
ABSTRACT
In this paper, a new no-reference image quality assessment (NR-IQA) metric for grey images is proposed using the LIVE II image database. The features used are extracted from three well-known NR-IQA objective metrics based on natural scene statistical attributes from three different domains. These metrics may contain redundant, noisy or less informative features which affect the quality score prediction. In order to overcome this drawback, the first step of our work consists in selecting the most relevant image quality features by using Singular Value Decomposition (SVD) based dominant eigenvectors. The second step is performed by employing a Relevance Vector Machine (RVM) to learn the mapping between the previously selected features and human opinion scores. Simulations demonstrate that the proposed metric performs very well in terms of correlation and monotonicity.
KEYWORDS

Natural Scene Statistics (NSS), Singular Value Decomposition (SVD), dominant eigenvectors, Relevance Vector Machine (RVM).
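
A minimal numpy sketch of ranking features by their loading on the dominant singular vectors follows; the energy cutoff and scoring rule are assumptions for illustration, not the paper's exact selection procedure.

import numpy as np

def select_features(X, energy=0.95):
    # X: (n_images, n_features) matrix of NSS quality features.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Keep enough singular vectors to explain the requested variance energy.
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    scores = np.abs(Vt[:k]).sum(axis=0)   # each feature's loading on those vectors
    return np.argsort(scores)[::-1], k    # features ordered by relevance

X = np.random.rand(80, 30)                # placeholder feature matrix
order, k = select_features(X)
X_reduced = X[:, order[:10]]              # keep the 10 most relevant features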


A PCM Receiver for Decoding the Deviated Clock Frequency Data Using FPGA Based Embedded System
Ahmed N. Sayed, Ahmed M. Abdelrazik, Mostafa M. Elhashash, Ahmed G. Khodary and Ali Maher, Military Technical College, Cairo, Egypt
ABSTRACT
The PCM decommutator is an essential subsystem in any communication system to decode the received parameters. In this work, a full PCM receiver is presented. The proposed PCM receiver is designed utilizing an FPGA-based embedded system with USB-UART interfacing. The designed hardware consists of four major parts: a PCM frequency detector, a decommutator, a shift register and a USB-UART interface. The shift register is designed to hold the whole decommutated frame until it is sent to a PC. The USB-UART interface is designed to enable communication between the FPGA and the PC's USB port. The proposed PCM receiver utilizes each incoming pulse to update the sampling frequency, to overcome the transmitter clock frequency drift. For the sake of system validation, a PCM transmitter is implemented on another FPGA board for sending clock-deviated data to the proposed receiver. Moreover, a monitoring program is implemented on a PC to receive and process the decoded data.
KEYWORDS

PCM decommutator, FPGA-based embedded systems, Digital signal processing & Clock frequency deviation


Sea Surface Electromagnetic Scattering Characteristics of JONSWAP Spectrum Influenced by its Parameters
Xiaolin Mi, Xiaobing Wang, Xinyi He and Fei Dai, Science and Technology on Electromagnetic Scattering Laboratory, Shanghai, China
ABSTRACT
The JONSWAP spectrum sea surface is mainly determined by parameters such as the wind speed, the fetch length and the peak enhancement factor. To study electromagnetic scattering from a JONSWAP spectrum sea surface, we need to determine the above parameters. In this paper, we use the double summation model to generate the multi-directional irregular rough JONSWAP sea surface and analyze the influence of the distribution concentration parameter and the peak enhancement factor on the rough sea surface model. We then use the physical optics method to analyze how the average backward scattering coefficient of the JONSWAP spectrum sea surface changes with different distribution concentration parameters and peak enhancement factors. The simulation results show that the influence of the peak enhancement factor on the average backward scattering coefficient of the ocean surface is less than 1 dB, but the influence of the distribution concentration parameter is more than 5 dB. Therefore, when we study the electromagnetic scattering of the JONSWAP spectral sea surface, the peak enhancement factor can be taken as its mean value, but the distribution concentration parameter has to be determined by the wave growth state.
KEYWORDS

JONSWAP spectrum, multidirectional wave, wave pool, the peak enhancement factor, electromagnetic scattering
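
The standard JONSWAP frequency spectrum with its peak enhancement factor gamma can be computed directly; the small numpy sketch below sweeps gamma using the textbook fetch-based parameterization (an assumption for illustration, not the paper's double summation surface model).

import numpy as np

def jonswap(f, U=10.0, fetch=50e3, gamma=3.3, g=9.81):
    # JONSWAP spectrum S(f) for wind speed U [m/s] and fetch [m];
    # gamma is the peak enhancement factor studied in the paper.
    alpha = 0.076 * (U**2 / (fetch * g)) ** 0.22
    fp = 3.5 * (g / U) * (g * fetch / U**2) ** (-0.33)   # peak frequency
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    return (alpha * g**2 * (2 * np.pi) ** -4 * f**-5
            * np.exp(-1.25 * (fp / f) ** 4) * gamma**r)

f = np.linspace(0.05, 1.0, 400)
for gamma in (1.0, 3.3, 7.0):          # sweep the peak enhancement factor
    S = jonswap(f, gamma=gamma)
    print(gamma, round(f[np.argmax(S)], 3), S.max())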


Three-Dimensional Reconstruction Using the Depth Map
A. El Abderrahmani1, R. Lasri2 and K. Satori3, 1,2Advance Technology Lab, Department of Computer Sciences, Larache Poly-Disciplinary School, Abdelmalek Essaâdi University and 3LIIAN, Department of Mathematics & Computer Sciences, Dhar-Mahraz Sciences School, Fez, Morocco
ABSTRACT
This paper presents an approach to reconstructing 3D objects based on the generation of a dense depth map. From two 2D images (a pair of images) of the same 3D object, taken from different points of view, a new grayscale image is estimated. It is an intermediate image between a purely 2D image and a 3D image, where each pixel represents a z-height according to its gray-level value. Our objective is therefore to vary the precision of this map in order to demonstrate its interest and effectiveness for the quality of the reconstruction.
KEYWORDS

Dense reconstruction, depth map, disparity map, camera parameters
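
A minimal OpenCV sketch of estimating a disparity map from a rectified stereo pair and converting it to depth follows; the file names, focal length and baseline are hypothetical placeholders, not values from the paper.

import cv2
import numpy as np

# left.png / right.png: a rectified grayscale stereo pair (hypothetical files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

# With focal length fx [px] and baseline B [m], depth follows Z = fx * B / disparity.
fx, B = 700.0, 0.1
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = fx * B / disparity[valid]   # grayscale depth map: one z per pixel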