Browsing by Author "Cintas, Emre"
Now showing 1 - 4 of 4
Article
Analysing Headway Spacing and Calculating Passenger Car Equivalent Values Using Computer Vision and International Dataset (Sveučilište u Zagrebu, Fakultet Prometnih Znanosti, 2025)
Celik, Burak; Tortum, Ahmet; Cintas, Emre; Ozyer, Baris
Accurate traffic flow data are crucial for effective transportation planning and management. Different vehicle types affect traffic flow differently, requiring distinct passenger car equivalency (PCE) factors when calculating intersection and road capacity. Headway and spacing data are essential for assessing traffic density and level of service. Conventional data collection methods are time-consuming and often inaccurate. Unlike existing studies, this study employed computer vision to measure mixed traffic stream volume in terms of passenger car equivalents and to collect headway and spacing data with high accuracy. The vehicle detection and counting procedures provide the necessary infrastructure for measuring mixed traffic stream volume and collecting headway and spacing data. Novel approaches were introduced to gather comprehensive traffic data, including passenger car equivalent values, headway, spacing, flow rate, vehicle speed and traffic volume, using a single system. A custom and comprehensive international dataset was collected to analyse these approaches. Our trained model achieved a mean average precision (mAP) of 97.4%, with accuracies of 95% for headway, 93% for spacing and 99% for PCE values. The dataset can be downloaded at https://github.com/burak-celik/atavehicledataset.

Article
Predicting Air Quality Index in Başakşehir, Istanbul with Hybrid AI Models: Unveiling Key Drivers Through CatBoost-Based SHAP and Feature Importance Analysis (Springer Wien, 2025)
Akiner, Muhammed Ernur; Katipoglu, Okan Mert; Cintas, Emre
Urban air quality influences public health, ecosystem sustainability and economic productivity. This study focuses on predicting the Air Quality Index (AQI) in Başakşehir, Istanbul.
The study proposes a hybrid artificial intelligence (AI) model that combines Categorical Boosting (CatBoost), Shapley Additive Explanations (SHAP) and feature importance analysis. The dataset encompasses various meteorological parameters, including Tempmax, Tempmin, Temp, Dew, Humidity, Precip, Windspeed, Sea level pressure, Cloud cover, Solar radiation, Solar energy and UV index, in addition to air quality parameters such as PM10, SO2, CO, NO2, NOX, NO and O3. These variables serve as inputs for models including ANN, BAT-ANN, BBO-ANN, GWO-ANN, HCA-ANN, CatBoost and CNN, with the aim of enhancing the accuracy of AQI prediction. When the combined set of variables was employed as input, the most precise results came from the CNN model, which yielded an RMSE of 1.43, an AIC of 949.21, and an NSE and R2 of 0.99. The CatBoost model exhibited exceptional performance across the various input combinations, providing the most accurate results for those configurations. Non-parametric Friedman and Nemenyi post-hoc tests were used for multi-model comparison, and the p-values indicated significant performance differences between the models, both overall and in pairwise comparisons. While prior studies have explored hybrid AI models for AQI prediction, this study uniquely integrates CatBoost and SHAP for enhanced explainability and model performance evaluation. SHAP analysis provided transparent insights into variable contributions, and PM10 emerged as the dominant predictor, achieving the highest mutual information score of 6.88. These findings underscore the importance of integrating pollutant and meteorological data.
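The mutual-information ranking mentioned above (PM10 scoring highest against AQI) can be illustrated with a simple histogram-based estimator. This is a generic sketch, not the paper's implementation; the equal-width binning and bin count are assumptions for illustration only:

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram-based mutual information estimate (in nats) between two
    continuous series, the kind of score used to rank predictors such as
    PM10 against AQI. Equal-width binning is a simplification."""
    def digitize(vals):
        lo, hi = min(vals), max(vals)
        w = (hi - lo) / bins or 1.0          # guard against constant series
        return [min(int((v - lo) / w), bins - 1) for v in vals]
    bx, by = digitize(xs), digitize(ys)
    n = len(xs)
    px, py = Counter(bx), Counter(by)
    pxy = Counter(zip(bx, by))
    # MI = sum over joint bins of p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

# A series is maximally informative about itself; with 4 uniform bins
# the estimate equals log(4) nats.
x = [float(i) for i in range(100)]
print(round(mutual_information(x, x), 3))  # prints 1.386
```

Independent or constant series score near zero, which is what makes the measure usable as a feature-ranking criterion.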
The proposed methodology aligns with global sustainability goals, including SDG 11 (Sustainable Cities and Communities) and SDG 13 (Climate Action).

Article
Vision-Based Moving UAV Tracking by Another UAV on Low-Cost Hardware and a New Ground Control Station (IEEE-Inst Electrical Electronics Engineers Inc, 2020)
Cintas, Emre; Ozyer, Baris; Simsek, Emrah
Automatic flying target detection and tracking in video sequences acquired from a camera mounted on another Unmanned Aerial Vehicle (UAV) is a challenging task due to the non-stationary cameras in the system, the dynamic motion of the moving target, and the high computational cost of real-time applications. In this paper, we aim to automatically detect and track a moving UAV from another UAV while both are flying. To run efficiently in real-time applications, we developed a vision-based low-cost hardware system integrated with an independent ground control station. We first created a new public dataset called ATAUAV, which includes different types of UAV images obtained from videos recorded in our experiments and from searches on Google Images, for the training process. The deep-learning-based YOLOv3-Tiny (You Only Look Once) detector is used for target detection with the highest accuracy and fastest results. A Kernelized Correlation Filter (KCF) adapted to YOLO, running on low-cost hardware, is used for real-time tracking of the detected target. We compared the performance of the proposed approach with different tracking algorithms. Experimental results show that the proposed approach achieves the highest accuracy rate of 82.7% and a mean speed of 29.6 fps on CPU.
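A detector-plus-tracker combination like the YOLOv3-Tiny/KCF pairing described above is commonly glued together by re-initialising the fast tracker from the detector whenever the two disagree. The sketch below shows that generic handoff logic only; the IoU threshold and function names are illustrative assumptions, not the paper's actual re-initialisation rule:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes; used to decide
    whether the cheap tracker has drifted away from the latest detection."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def track_step(tracker_box, detection_box, iou_threshold=0.3):
    """One frame of detector/tracker handoff: keep the tracker's box while
    it agrees with the detector, re-initialise from the detection when it
    drifts. Returns (box, reinitialised?). Threshold is illustrative."""
    if detection_box is None:            # detector found nothing this frame
        return tracker_box, False
    if iou(tracker_box, detection_box) < iou_threshold:
        return detection_box, True       # hand the detection to the tracker
    return tracker_box, False
```

Running the detector every frame (or every N frames) while the correlation filter fills the gaps is what keeps such systems real-time on CPU-only hardware.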
The dataset can be downloaded at http://cogvi.atauni.edu.tr/ResearchLab/PageDetail/Our-ATAUAVs-Dataset-86.

Correction
Vision-Based Moving UAV Tracking by Another UAV on Low-Cost Hardware and a New Ground Control Station (Vol 8, Pg 194601, 2020) (IEEE-Inst Electrical Electronics Engineers Inc, 2021)
Cintas, Emre; Ozyer, Baris; Simsek, Emrah
In the above article [1], our aim was to automatically detect and track a moving target Unmanned Aerial Vehicle (UAV) [4] with another UAV in real-time applications. For that purpose, we developed a vision-based low-cost hardware system and analyzed the performance of tracking algorithms. In addition, we developed a ground control station integrated with the low-cost hardware system. We first created a new public dataset that includes different types of UAV images, mostly recorded in our experiments and gathered from searches on Google Images. The dataset is publicly available at http://cogvi.atauni.edu.tr/ResearchLab/PageDetail/Our-ATAUAVs-Dataset-86.
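The headway and flow-rate quantities measured in the first article above follow directly once a detector timestamps each vehicle crossing a reference line. A minimal sketch of that derivation, with hypothetical function names (the paper's own pipeline also handles spacing, speed and PCE, which are not shown):

```python
def headways(crossing_times):
    """Time headways (s) between successive vehicles, computed from the
    per-vehicle timestamps a detector yields at a reference line."""
    return [t2 - t1 for t1, t2 in zip(crossing_times, crossing_times[1:])]

def flow_rate_vph(crossing_times):
    """Hourly flow rate (vehicles/hour) from the mean headway:
    3600 s divided by the average gap between vehicles."""
    hs = headways(crossing_times)
    return 3600.0 / (sum(hs) / len(hs))

# Four vehicles crossing 2 s apart -> 2 s mean headway -> 1800 veh/h.
print(flow_rate_vph([0.0, 2.0, 4.0, 6.0]))  # prints 1800.0
```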

