5th International Conference on Artificial Intelligence and Applications (AIFU 2019)

July 13-14, 2019, Toronto, Canada

Accepted Papers


    Fast And Accurate Trajectory Tracking For Unmanned Aerial Vehicles Based On Deep Reinforcement Learning
    Yilan Li, Syracuse University, USA
    ABSTRACT
    Continuous trajectory control of fixed-wing unmanned aerial vehicles (UAVs) is complicated when hidden dynamics are considered. Because a UAV has many degrees of freedom, tracking methods based on conventional control theory, such as Proportional-Integral-Derivative (PID) control, have limitations in response time and adjustment robustness, while model-based approaches that compute forces and torques from the UAV's state are complicated and rigid. We present an actor-critic reinforcement learning framework that controls the UAV trajectory through a set of desired waypoints. A deep neural network is constructed to learn the optimal tracking policy, and reinforcement learning is used to optimize the resulting tracking scheme. The experimental results show that our proposed approach achieves 58.14% less position error, 21.77% less system power consumption and 9.23% faster attainment than the baseline. Because the actor network consists of only linear operations, Field-Programmable Gate Array (FPGA) based hardware acceleration can easily be designed for energy-efficient real-time control.
    KEYWORDS

    Deep Reinforcement Learning, Trajectory Tracking, Actor-critic model, Unmanned Aerial Vehicles, FPGA
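
    The actor-critic structure described in the abstract above can be illustrated with a minimal sketch. The state and action dimensions, layer sizes and control outputs below are assumptions for illustration, not the authors' architecture; the actor is kept to purely linear layers to mirror the paper's note that it uses only linear operations, and the critic is used only during training.

    import torch
    import torch.nn as nn

    STATE_DIM = 12   # assumed: UAV position, velocity, attitude and next-waypoint offset
    ACTION_DIM = 4   # assumed: throttle and control-surface commands

    class Actor(nn.Module):
        """Tracking policy built from linear layers only, so inference reduces to
        multiply-accumulate operations that map naturally onto an FPGA pipeline."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64),
                nn.Linear(64, ACTION_DIM),
            )
        def forward(self, state):
            return self.net(state)

    class Critic(nn.Module):
        """Q-value estimator that scores (state, action) pairs during training."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

    actor, critic = Actor(), Critic()
    state = torch.randn(1, STATE_DIM)   # placeholder for a real UAV state observation
    action = actor(state)               # control command steering toward the waypoint
    q_value = critic(state, action)     # critic feedback used to update the actor
    print(action.shape, q_value.shape)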

    English To Arabic Machine Translation System Based On Rule-Based Method
    Khaled Elmenshawy, Department of Computer Science, Elshourok Academy, Cairo, Egypt
    ABSTRACT
    English-Arabic machine translation has received growing attention in recent years, and many projects have been carried out to improve the quality of translation into and from Arabic. This research focuses on machine translation from the source language (English) to the target language (Arabic) using an English-Arabic electronic dictionary. The challenges addressed include conveying the appropriate meaning of the source language (SL) in the target language (TL), the different sentence structures of the two languages, word agreement, word ordering, verbal forms and linguistic structures. The aim of this research was to design and build an automatic English-to-Arabic translation system based on a dictionary of English roots using a rule-based method. The proposed machine translation system uses a transfer strategy divided into three phases: analysis, transfer and generation of sentences in the target language. The system was evaluated by first selecting a set of English sentences covering all sentence structures, and then a second set of long sentences containing more than one structure. All results of the system were compared with the output of various online translation services.
    KEYWORDS

    Machine translation, rule-based approach, Arabic language, English language, sentence structure, morphological analysis.
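
    As an illustration of the three-phase transfer strategy described above, the following sketch runs a toy sentence through analysis, transfer and generation. The dictionary entries, the root-stripping rule and the single SVO-to-VSO reordering rule are hypothetical and far simpler than the system the abstract describes.

    # Toy, assumed root-based English -> Arabic dictionary entries.
    TOY_DICTIONARY = {
        "boy": "الولد",
        "book": "الكتاب",
        "read": "يقرأ",
    }

    def analyse(sentence):
        """Analysis phase: tokenize and strip a simple inflection to reach the root."""
        tokens = sentence.lower().split()
        return [t[:-1] if t.endswith("s") else t for t in tokens if t != "the"]

    def transfer(roots):
        """Transfer phase: reorder English SVO structure into Arabic VSO order."""
        subject, verb, obj = roots
        return [verb, subject, obj]

    def generate(ordered_roots):
        """Generation phase: look up Arabic surface forms and join them."""
        return " ".join(TOY_DICTIONARY[r] for r in ordered_roots)

    english = "The boy reads the book"
    print(generate(transfer(analyse(english))))   # يقرأ الولد الكتاب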

      Data Augmentation Based On Pixel-Level Image Blend And Domain Adaptation
      Di Liu, Xiao-Chun Hou, Yan-Bo Liu, Lei Liu, Yan-Cheng Wang
      School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
      ABSTRACT
      Object detection typically requires a large amount of data to ensure detection accuracy, but sufficient data is often unavailable in practice. This paper presents a new data augmentation method based on pixel-level image blending and domain adaptation. The method consists of two steps: (1) image blending, using a labelled dataset to provide object instances and an unlabelled dataset to provide background images; (2) domain adaptation based on Cycle Generative Adversarial Networks (CycleGAN), in which a neural network is trained to transform the samples from step 1 so that they approximate the original dataset. Statistical consistency between the datasets generated by different augmentation methods and the original dataset is measured with metrics such as generator loss and Hellinger distance. Furthermore, a detection/segmentation network for diabetic retinopathy based on Mask R-CNN is built and trained on the generated dataset, and the effect of the data augmentation method on detection accuracy is reported.
      KEYWORDS

      Data Augmentation, Object Detection, Image Blend, Domain Adaptation, Diabetic Retinopathy
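
      The pixel-level blend in step 1 can be sketched as below, assuming a binary instance mask; the paste location, sizes and toy images are illustrative, not the authors' pipeline, and the CycleGAN step would subsequently adapt such composites toward the original data distribution.

      import numpy as np

      def blend_instance(background, instance, mask, top, left):
          """Blend an object `instance` (H, W, 3) into `background` where `mask` is 1."""
          out = background.copy().astype(np.float32)
          h, w = mask.shape
          region = out[top:top + h, left:left + w]
          alpha = mask[..., None].astype(np.float32)          # (H, W, 1) in [0, 1]
          out[top:top + h, left:left + w] = alpha * instance + (1.0 - alpha) * region
          return out.astype(np.uint8)

      # Toy example: a 32x32 white square blended into a 128x128 grey background.
      background = np.full((128, 128, 3), 90, dtype=np.uint8)
      instance = np.full((32, 32, 3), 255, dtype=np.uint8)
      mask = np.ones((32, 32), dtype=np.uint8)
      composite = blend_instance(background, instance, mask, top=48, left=48)
      print(composite.shape)  # the pasted mask's bounding box becomes the new label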

        sEMG-BASED HUMAN MOTION INTENTION RECOGNITION: A REVIEW
        Geng Liu1, Li Zhang1, Bing Han1, Zhe Wang1 and Jianping Yuan2
        1Shaanxi Engineering Laboratory for Transmissions and Controls, Northwestern Polytechnical University, Xi’an, China
        2Science and Technology on Aerospace Flight Dynamics Laboratory, Northwestern Polytechnical University, Xi’an, China
        ABSTRACT
        Human motion intention recognition is key to achieving seamless human-machine coordination and wearing comfort in wearable robots. Surface electromyography (sEMG), as a bioelectrical signal, is generated prior to the corresponding motion and reflects the human motion intention directly. Thus, better human-machine interaction can be achieved by using sEMG-based motion intention recognition. In this paper, we review and discuss in detail the state of the art of sEMG-based motion intention recognition. According to the method adopted, motion intention recognition is divided into two groups: recognition based on sEMG-driven musculoskeletal (MS) models and recognition based on artificial neural network (ANN) models. The specific models and recognition results of each study are analysed and systematically compared. Finally, the existing problems in current studies, major advances and future challenges are discussed.
        KEYWORDS

        Neural Network, sEMG, Motion Intention Recognition, Motion Classification, Motion Regression
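
        As a minimal illustration of the ANN-based group of methods surveyed above (our own sketch, not drawn from any single reviewed study), common time-domain features extracted from a window of sEMG samples can be fed to a small classifier that outputs a discrete motion intention; the channel and class counts below are assumptions.

        import numpy as np
        import torch
        import torch.nn as nn

        def emg_features(window):
            """Common time-domain sEMG features per channel: MAV, RMS, waveform length."""
            mav = np.mean(np.abs(window), axis=0)
            rms = np.sqrt(np.mean(window ** 2, axis=0))
            wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
            return np.concatenate([mav, rms, wl])

        N_CHANNELS, N_CLASSES = 8, 4                  # assumed electrode and motion counts
        classifier = nn.Sequential(
            nn.Linear(3 * N_CHANNELS, 32), nn.ReLU(),
            nn.Linear(32, N_CLASSES),                 # logits over motion intentions
        )

        window = np.random.randn(200, N_CHANNELS)     # placeholder 200-sample sEMG window
        features = torch.tensor(emg_features(window), dtype=torch.float32)
        print(classifier(features).argmax().item())   # predicted motion class index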

          Deep Learning for Real-time Gesture Recognition of Brazilian Sign Language
          Gabriel Ilharco Magalhaes and Paulo André Lima de Castro, Instituto Tecnológico de Aeronáutica, Brazil
          ABSTRACT
          In a world where more than 70 million people rely on sign language to communicate, a system capable of recognizing and translating gestures into written or spoken language would have great social impact. In this paper, a state-of-the-art approach to real-time gesture recognition for Brazilian Sign Language using a simple color camera is presented. Two novel datasets, one for static and one for continuous recognition, are introduced; unlike those used in many other studies, they impose no restrictions on clothing, background, lighting or the distance between the camera and the user. In contrast to traditional architectures, our deep neural network approach does not rely on heavily engineered pipelines and feature extraction steps. The proposed system is robust, achieving state-of-the-art classification performance while running in real time on a GPU.
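
          A minimal sketch of an end-to-end convolutional classifier for the static-gesture setting is shown below; the input resolution, layer sizes and number of gesture classes are assumptions for illustration and do not reproduce the authors' architecture. The point is that the raw RGB frame enters the network directly, with no hand-crafted feature extraction step.

          import torch
          import torch.nn as nn

          N_GESTURES = 40                                # assumed number of static signs

          model = nn.Sequential(
              nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
              nn.Flatten(),
              nn.Linear(64, N_GESTURES),                 # logits over gesture classes
          )

          frame = torch.rand(1, 3, 128, 128)             # placeholder for a camera frame
          logits = model(frame)
          print(logits.argmax(dim=1).item())             # predicted gesture index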