Browsing by Author "Ozyer, B."
Now showing 1 - 8 of 8
Conference Object: Balancing Inverted Pendulum Using Reinforcement Algorithms (Institute of Electrical and Electronics Engineers Inc., 2016). Özakar, R.; Tumuklu Ozyer, G.T.; Ozyer, B.
With the advancements in technology, robots have become systems that can learn and achieve complex behaviors in real life with the help of machine learning algorithms. Among these algorithms, reinforcement learning algorithms are widely used in robotics to teach systems by trial and error. In this work, our goal is to apply two different reinforcement learning algorithms, Q-learning and the Adaptive Heuristic Critic (AHC) algorithm, to the well-known cart-pole balancing problem and examine the performance results. We used the Box2D physics engine to simulate the cart-pole model and its environment. In the experimental results, the AHC algorithm balanced the system for more steps than the Q-learning algorithm. © 2016 IEEE.

Conference Object: Ball-Cradling Using Reinforcement Algorithms (Institute of Electrical and Electronics Engineers Inc., 2017). Özakar, R.; Ozyer, B.
Learning and executing a new task autonomously is a complex problem for robots. Reinforcement learning, a machine learning method modeled on human learning and behavior, was developed for this purpose. In this work, we model a problem called ball-cradling, in which moving links act as fingers and are taught, using reinforcement learning algorithms, to balance a ball falling from above without dropping it. The Q-learning, SARSA, and Adaptive Heuristic Critic (AHC) algorithms were tested on this problem using the Box2D simulator. The ball's position and linear velocity and the links' angle and angular velocity were used as state-space parameters. In the single-link system, all algorithms managed to balance the ball without dropping it to the ground. In the two-link system, successful learning was not achieved.
The AHC algorithm showed better learning performance than the other algorithms. © 2016 The Chamber of Turkish Electrical Engineers.

Article: Deep Learning Enhanced Monocular Visual Odometry: Advancements in Fusion Mechanisms and Training Strategies (Elsevier, 2025). Simsek, E.; Ozyer, B.
Recent advances in deep learning have revolutionized robotic applications such as 3D mapping, visual navigation, and autonomous control. Monocular Visual Odometry (MVO) represents a critical advancement in autonomous systems, particularly drones, using single-camera setups to navigate complex environments effectively. This review explores MVO's evolution from traditional methods to its integration with cutting-edge technologies such as deep learning and semantic understanding. In this study, we explore the latest training strategies, innovations in model architecture, and advanced fusion techniques used in hybrid models that combine depth and semantic information. A comprehensive literature review traces the evolution of MVO techniques, highlighting key datasets and performance metrics. Section 2 outlines the problem; Section 3 reviews studies charting the evolution of MVO techniques before the advent of deep learning; Section 4 details the methodology, focusing on cutting-edge training strategies, advancements in architectural design, and fusion techniques in hybrid models integrating depth and semantic information; and Section 5 summarizes findings, discusses implications, and suggests future research directions.

Article: DeepDCT-VO: 3D Directional Coordinate Transformation for Low-Complexity Monocular Visual Odometry Using Deep Learning (Elsevier, 2025). Simsek, E.; Ozyer, B.
Deep learning-based monocular visual odometry has gained importance in robotics and autonomous navigation due to its robustness in visually challenging environments and minimal sensor requirements.
However, many existing deep learning-based MVO methods suffer from high computational costs and large model sizes, making them less suitable for real-time applications on resource-limited systems. In this study, we propose DeepDCT-VO, a lightweight visual odometry method that combines a three-dimensional directional coordinate transformation with a compact deep learning architecture. Unlike traditional approaches that estimate translation in a global coordinate system and are prone to drift accumulation, DeepDCT-VO uses local directional motion derived from composite rotations. This approach avoids global trajectory reconstruction, thereby improving the method's stability and reliability. The proposed model operates on input images at multiple resolutions (120 x 120, 240 x 240, 360 x 360, and 480 x 480), leveraging attention-guided residual learning to extract robust features. Additionally, it incorporates multi-modal information, specifically depth and semantic maps, to further improve the accuracy of pose estimation. Evaluations on the KITTI odometry benchmark demonstrate that DeepDCT-VO achieves competitive trajectory estimation accuracy while maintaining real-time performance: 8 ms per frame on GPU and 12 ms on CPU. Compared to the existing method with the lowest translational drift (t_rel), DeepDCT-VO reduces model size by approximately 96.3% (from 37.5 million to 1.4 million parameters). Conversely, compared to the lightest model in terms of parameter count, DeepDCT-VO reduces t_rel from 8.57% to 1.69%, an 80.3% reduction in translational drift.
These results underscore the effectiveness of DeepDCT-VO in delivering accurate and efficient monocular visual odometry, particularly suited for embedded and resource-limited applications, while the proposed transformation method also serves to reduce translational complexity.

Conference Object: Development of Intelligent Ground Control Station and Autopilot Architectures for UAVs and UGVs (Institute of Electrical and Electronics Engineers Inc., 2024). Cintas, E.; Ozyer, B.
Multi-rotor Unmanned Aerial and Ground Vehicles (UAVs/UGVs) are used for various military and civilian purposes such as detection, mapping, surveillance, target destruction, observation, and logistics, owing to their high agility, mechanical simplicity, ease of control, and autonomous capabilities. With advancing technology, UAVs and UGVs can be controlled autonomously by onboard computers or remotely by a pilot at a ground control station (GCS). Integrating these onboard computers into UAVs/UGVs enables information flow between their different modules and the GCS. However, because GCS software in the literature processes this information flow with traditional methods, different and more intelligent GCS software and autopilot design approaches are needed. In this study, a new intelligent GCS capable of remotely commanding and controlling UAVs/UGVs has been designed; it is equipped with artificial intelligence features and is capable of real-time recognition and positioning of more than seventy object classes as well as communication with UAVs/UGVs. The open-source Java programming language was used for implementation, and the MAVLink (Micro Air Vehicle Link) protocol was used for communication between the UAVs/UGVs and the GCS. For real-time object detection and localization, SSD MobileNet v2, one of the fast and highly accurate algorithms in the literature, was used and integrated with the GCS via a Flask API.
The obtained results demonstrated consistency between the initial motivation and the software outputs, leading to the development of open-source intelligent ground control station and autopilot software. © 2024 IEEE.

Article: Ontology-Based Instantaneous Route Suggestion of Enemy Warplanes with Unknown Mission Profile (Sakarya University, 2020). Cintas, E.; Ozyer, B.; Hanay, Y.S.
The routes of warplanes are planned confidentially and are not shared with any organization in advance. In some cases, border violations may occur, increasing the tension between two states. This situation puts many people at risk and harms the state's prestige both economically and socially. In this paper, an Ontology-Based Instantaneous Route Suggestion System (SUARSIS) based on a semantic approach is proposed to predict and plan the routes of warplanes before they reach their targets. In the proposed system, we developed an architecture called Ontology-Based Route Suggestion using the OWL (Web Ontology Language) with realistic data. The aircraft model, the aircraft fuel system, the features of the military field, and their relations in the semantic context are logically defined through the ontology. Synthetic scenarios were created to validate the accuracy of the proposed method. Experimental results show that the proposed system performs well at predicting warplane routes. © 2020, Sakarya University. All rights reserved.

Conference Object: Recognizing Self-Stimulatory Behaviours for Autism Spectrum Disorders (Institute of Electrical and Electronics Engineers Inc., 2020). Kacdioglu, S.; Ozyer, B.; Tumuklu Ozyer, G.T.
Autism spectrum disorder (ASD) is a neurobiological disorder in which symptoms such as deficits in social interaction and communication and limited, repetitive behavior are observed in patients. Repetitive behaviors are significant clues for the diagnosis of ASD.
These repetitive behaviors, which are called self-stimulating behaviors, include flapping the arms like wings, shaking the head back and forth, and spinning around. Physicians must observe and examine these self-stimulating behaviors by interacting with children for a long time, which makes early diagnosis of ASD difficult. In this paper, the self-stimulating behaviors of children with ASD are examined using deep learning algorithms. For this purpose, a new video dataset, recorded by parents in daily environments without being dependent on the hospital environment, is created. Video features are extracted using 3DCNN and ConvLSTM deep learning algorithms, and softmax regression is applied as the classifier. In the experiments, 75.93% accuracy is obtained even though the videos are recorded in daily environments. © 2020 IEEE.

Article: Selected Three Frame Difference Method for Moving Object Detection (Ismail Saritas, 2021). Simsek, E.; Ozyer, B.
Three frame difference is one of the well-known methods used for moving object detection. In this method, the presence of a moving object is estimated by subtracting three consecutive image frames, which yields the moving object's edges. However, these edges do not give complete information about the moving object, which means the method leads to a loss of information. Post-processing methods such as morphological operations, optical flow, or combinations of these techniques must be applied to obtain complete information about the moving object. In this paper, we present a new approach called Selected Three Frame Difference (STFD) to detect moving objects in video sequences without any post-processing operations. We first propose an algorithm that selects three images based on the local maximum value of the frame differences.
Instead of using three consecutive frames, these three selected image differences, which contain non-overlapping object frames, are combined with the logical AND operator. We mathematically prove that the entire moving object is always detected in the second selected image. We analyzed the proposed method on a public benchmark dataset and on a dataset collected in our laboratory. To validate the performance of our approach, we also compared it with the three frame difference method and with traditional background subtraction-based moving object detection methods on sample videos selected from different datasets. © 2021, Ismail Saritas. All rights reserved.
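The classic three frame difference that the STFD paper builds on can be sketched in a few lines. Below is a minimal NumPy illustration of that baseline only (the function name, threshold value, and synthetic frames are invented for the example), not the authors' STFD frame-selection step:

```python
import numpy as np

def three_frame_difference(f1, f2, f3, thresh=25):
    """Classic three frame difference: a pixel is marked as moving only
    if it changed both between f1 and f2 AND between f2 and f3."""
    # Cast to a signed type first so uint8 subtraction does not wrap around.
    d1 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    d2 = np.abs(f3.astype(np.int16) - f2.astype(np.int16)) > thresh
    return d1 & d2  # logical AND of the two thresholded differences

# Tiny synthetic sequence: a bright 2x2 block moving two columns per frame.
frames = []
for x in (0, 2, 4):
    f = np.zeros((4, 6), dtype=np.uint8)
    f[1:3, x:x + 2] = 255
    frames.append(f)

mask = three_frame_difference(*frames)
# With this motion, the mask recovers the block at its middle-frame
# position (rows 1-2, columns 2-3).
```

When the object moves by less than its own width per frame, the two difference images overlap only at the object's edges, which is exactly the information loss the paper points out and the STFD selection step is designed to avoid.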

