<?xml version="1.0" encoding="UTF-8"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>2023</title>
<link href="http://210.212.227.212:8080/xmlui/handle/123456789/500" rel="alternate"/>
<subtitle/>
<id>http://210.212.227.212:8080/xmlui/handle/123456789/500</id>
<updated>2026-05-17T00:01:37Z</updated>
<dc:date>2026-05-17T00:01:37Z</dc:date>
<entry>
<title>REAL TIME TRANSFORMER BASED OBJECT DETECTION USING YOLOv8</title>
<link href="http://210.212.227.212:8080/xmlui/handle/123456789/507" rel="alternate"/>
<author>
<name>Vishnu, V Nair</name>
</author>
<author>
<name>Thushara, A</name>
</author>
<id>http://210.212.227.212:8080/xmlui/handle/123456789/507</id>
<updated>2023-10-28T10:00:34Z</updated>
<published>2023-07-11T00:00:00Z</published>
<summary type="text">REAL TIME TRANSFORMER BASED OBJECT DETECTION USING YOLOv8
Vishnu, V Nair; Thushara, A
Real-time object detection is a computer vision task that involves identifying and localizing objects of interest within an image or video. Many challenges need to be addressed in object detection, including occlusions, scale variations, clutter in the background, deformations and variations of objects, limited data, real-time processing demands, imbalanced classes, and the need to adapt to new object categories. This project proposes a Transformer-based object detection model to tackle the aforementioned challenges. The proposed model utilizes Transformers, originally designed for natural language processing, to address object detection challenges. The model leverages the self-attention mechanism in Transformers for feature extraction rather than relying on convolutional neural networks. This allows the model to effectively capture global and local features and learn complex spatial relationships between objects. Furthermore, the fully connected layers in the conventional object detection method are replaced with a Transformer-based detection head in the proposed model. This modification allows the model to utilize the strengths of Transformers in processing the extracted features and generating precise bounding box predictions. The model can also learn complex object representations and handle object occlusion, scale variation, and other challenging scenarios more effectively. This adaptation enhances the model's capability to accurately detect and localize objects in various real-world applications. The performance of the proposed Transformer-based object detection model is evaluated through experiments on widely recognized object detection benchmarks such as COCO. Additionally, proprietary datasets such as Next wealth are used to gauge the model's performance. The results of these evaluations exhibit significant enhancements in metrics such as mean average precision and localization accuracy compared to other state-of-the-art methods. The Transformer-based object detection model demonstrates promising outcomes, showcasing improved accuracy and the capability to handle challenging scenarios and complex object interactions effectively.
</summary>
<dc:date>2023-07-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>State of Charge Estimation for Lithium Ion Battery Based on Reinforcement Learning</title>
<link href="http://210.212.227.212:8080/xmlui/handle/123456789/506" rel="alternate"/>
<author>
<name>Ashna, K</name>
</author>
<author>
<name>Manu, J Pilla</name>
</author>
<id>http://210.212.227.212:8080/xmlui/handle/123456789/506</id>
<updated>2023-10-28T09:57:34Z</updated>
<published>2023-07-07T00:00:00Z</published>
<summary type="text">State of Charge Estimation for Lithium Ion Battery Based on Reinforcement Learning
Ashna, K; Manu, J Pilla
This project aims to develop a state-of-the-art approach for accurately estimating the state of charge (SOC) of lithium-ion batteries by leveraging the combined power of reinforcement learning, Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks. The dataset used for this study is the BMW dataset, which comprises real-world battery data collected from electric vehicles. The primary objective is to train a reinforcement learning agent to learn optimal policies for SOC estimation through iterative trial-and-error interactions with the battery system. By continuously exploring and adapting its decision-making process, the agent can effectively estimate the SOC with high accuracy and adaptability. To further enhance the SOC estimation process, CNNs are incorporated into the proposed framework. CNNs excel at extracting spatial features from complex datasets, which is particularly useful in analyzing battery voltage data. By capturing local patterns and variations in the battery response, the CNNs can effectively identify critical features that contribute to accurate SOC estimation. Additionally, LSTM networks are employed to model the temporal dependencies inherent in battery behavior. The LSTM networks can effectively capture the dynamic nature of battery performance by analyzing voltage and current data over time, enabling accurate SOC estimation even in varying operating conditions. Through comprehensive experiments and evaluations on the BMW dataset, the proposed approach demonstrates superior performance compared to traditional SOC estimation methods. The reinforcement learning agent, in combination with CNNs and LSTM networks, achieves high precision, adaptability, and robustness in estimating the SOC of lithium-ion batteries. The project's outcomes have significant implications for battery management systems, energy optimization, and prolonging the lifespan of lithium-ion batteries in electric vehicle applications. By accurately monitoring and estimating the SOC, the proposed approach contributes to more efficient and reliable battery usage, thereby improving overall performance and addressing the challenges associated with battery degradation and limited lifespan in electric vehicle technologies.
</summary>
<dc:date>2023-07-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>A DEEP AUTO ENCODER WITH REINFORCEMENT LEARNING FOR ROTATING MACHINERY FAULT DIAGNOSIS</title>
<link href="http://210.212.227.212:8080/xmlui/handle/123456789/505" rel="alternate"/>
<author>
<name>Safana, F</name>
</author>
<author>
<name>Aneesh, G  Nath</name>
</author>
<id>http://210.212.227.212:8080/xmlui/handle/123456789/505</id>
<updated>2023-10-28T09:52:25Z</updated>
<published>2023-07-07T00:00:00Z</published>
<summary type="text">A DEEP AUTO ENCODER WITH REINFORCEMENT LEARNING FOR ROTATING MACHINERY FAULT DIAGNOSIS
Safana, F; Aneesh, G  Nath
Rotating machinery plays a critical role in various industrial applications, and ensuring its operational reliability and safety is of utmost importance. Fault diagnosis in rotating machinery is a vital task that involves identifying and addressing potential issues to prevent catastrophic accidents and enable effective maintenance. Traditional fault diagnosis methods have certain limitations, such as manual analysis and limited accuracy. In recent years, deep learning techniques have emerged as promising approaches for automating the fault diagnosis process. This study proposes a novel approach for fault diagnosis in rotating machinery by combining deep learning with reinforcement learning. The proposed method leverages a deep autoencoder augmented with reinforcement learning techniques to improve the accuracy and effectiveness of fault diagnosis. The deep autoencoder extracts relevant features by compressing input data into a lower-dimensional representation and reconstructing the original input. This process inherently performs feature extraction, capturing informative characteristics in the encoded layer. Furthermore, reinforcement learning, specifically a deep Q network, is employed to enhance the accuracy of failure mode diagnosis. By continuously interacting with the datasets and learning from the feedback received, the models can improve their diagnostic capabilities and handle compound failures more effectively. The performance of the proposed approach is evaluated using two real-world datasets, namely the CWRU and MAFAULDA datasets, which cover different fault diagnosis and time series analysis scenarios. Various models, including 1D CNN, LSTM, and GRU, are utilized to process the time series data and extract meaningful features. The evaluation metrics used to assess the effectiveness of the trained models include accuracy, precision, recall, and F1 score. Additionally, a confusion matrix and a classification report are generated to provide comprehensive insights into the performance of the models. The results demonstrate that the proposed approach, combining deep learning with reinforcement learning, holds significant potential for accurate fault diagnosis in rotating machinery.
</summary>
<dc:date>2023-07-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerial Scene Classification using VGG16 and Multiclass Linear SVM</title>
<link href="http://210.212.227.212:8080/xmlui/handle/123456789/504" rel="alternate"/>
<author>
<name>Bintu, K Babu</name>
</author>
<author>
<name>Shyna, A</name>
</author>
<author>
<name>Jini, Raju</name>
</author>
<id>http://210.212.227.212:8080/xmlui/handle/123456789/504</id>
<updated>2023-10-28T09:45:47Z</updated>
<published>2023-07-07T00:00:00Z</published>
<summary type="text">Aerial Scene Classification using VGG16 and Multiclass Linear SVM
Bintu, K Babu; Shyna, A; Jini, Raju
Aerial scene classification is the process of categorizing and analyzing images captured from an aerial perspective, enabling the identification of land cover, objects, and scene composition for various applications. Aerial scene classification plays a crucial role in various fields, including urban planning, environmental monitoring, and disaster management, by providing valuable insights into land cover, objects, and scene composition from an aerial perspective. Accurate classification of aerial scenes enables effective decision making, resource allocation, and informed analysis of large-scale imagery, contributing to improved spatial understanding and efficient management of diverse landscapes. In this work, classification of aerial images using a combination of VGG16 and a multiclass linear SVM classifier is proposed. Deep features are extracted using VGG16, and the multiclass linear SVM classifier is used to classify the given objects. The preprocessing steps include data augmentation, data normalization, feature extraction using a VGG16 model, and training a multiclass linear SVM classifier for aerial scene classification. The experiments are conducted on the NWPU and UCM datasets, and performance is evaluated using a confusion matrix, precision, and recall. The experimental results show that the proposed method yields 90% accuracy for the NWPU dataset and 95% accuracy for the UCM dataset.
</summary>
<dc:date>2023-07-07T00:00:00Z</dc:date>
</entry>
</feed>
