This project presents a robotic arm simulation controlled via natural language, utilizing a Large Language Model (LLM) to convert spoken commands into structured motion instructions. Spoken phrases such as “move up by 10 centimeters” are parsed into executable JSON commands, enabling real-time 3D movement of a virtual robotic arm. The simulation integrates speech recognition, natural language processing, and forward kinematics, providing an intuitive interface for exploring multimodal AI-based control systems.
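A minimal sketch of the command-handling side is shown below, assuming the LLM is prompted to return a small JSON schema and the arm is reduced to a 2-link planar chain for the forward-kinematics step; the field names, allowed actions, and link lengths are illustrative assumptions, not the project's actual values.

```python
import json
import math

# Hypothetical JSON schema the LLM is prompted to return, e.g.:
#   {"action": "move", "axis": "z", "distance_cm": 10}
# (field names are illustrative assumptions, not the project's actual schema)

def parse_command(llm_output: str) -> dict:
    """Validate the LLM's raw text as a structured motion command."""
    cmd = json.loads(llm_output)
    if cmd.get("action") not in {"move", "rotate", "home"}:
        raise ValueError(f"unsupported action: {cmd.get('action')}")
    return cmd

def forward_kinematics(theta1: float, theta2: float, l1: float = 1.0, l2: float = 1.0):
    """End-effector (x, y) of a 2-link planar arm, a stand-in for the full 3D chain."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Example: "move up by 10 centimeters" -> the LLM returns this JSON string
cmd = parse_command('{"action": "move", "axis": "z", "distance_cm": 10}')
print(cmd)                              # {'action': 'move', 'axis': 'z', 'distance_cm': 10}
print(forward_kinematics(0.3, 0.6))     # current end-effector position
```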
This project presents an interactive simulation of a 3D robotic arm controlled by real-time hand tracking via webcam. The system uses MediaPipe and OpenCV to detect hand positions and interpret directional gestures (left, right, up, down) as movement commands. A graphical interface built with Tkinter and Matplotlib visualizes the arm's response, enabling intuitive, vision-based control for human-robot interaction.
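The gesture-classification step might look roughly like the sketch below, which uses MediaPipe's hand-landmark solution and derives a direction from frame-to-frame wrist displacement; the landmark choice and movement thresholds are assumptions for illustration.

```python
import cv2
import mediapipe as mp

# Minimal sketch: classify left/right/up/down from wrist displacement between frames.
# Thresholds and the choice of the wrist landmark are illustrative assumptions.
mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)
prev = None
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            wrist = results.multi_hand_landmarks[0].landmark[mp_hands.HandLandmark.WRIST]
            if prev is not None:
                dx, dy = wrist.x - prev[0], wrist.y - prev[1]
                if abs(dx) > abs(dy) and abs(dx) > 0.02:
                    print("right" if dx > 0 else "left")   # image x grows rightward
                elif abs(dy) > 0.02:
                    print("down" if dy > 0 else "up")       # image y grows downward
            prev = (wrist.x, wrist.y)
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```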
This project focuses on real-time object detection using the YOLOv9 deep learning model. The system captures live video from a camera and processes each frame to detect and classify objects such as people, cell phones, and books. YOLOv9's detection speed and accuracy make the system suitable for surveillance, smart systems, and other automated environments where immediate object recognition is required.
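A minimal detection loop, assuming the Ultralytics runtime and a generic YOLOv9 checkpoint (the weights file name below is a placeholder), could look like this:

```python
import cv2
from ultralytics import YOLO

# "yolov9c.pt" is an assumed checkpoint name; substitute the project's actual weights.
model = YOLO("yolov9c.pt")

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]        # one Results object per frame
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{cls_name} {conf:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("YOLOv9 detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```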
This project applies the YOLOv8 deep learning model to fire detection in video frames. The system processes recorded video footage to identify and localize fire outbreaks. Trained on a comprehensive fire dataset, the model detects fire reliably across varied environments, making it well suited to post-event analysis and safety-monitoring applications.
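A sketch of the offline analysis pass, assuming an Ultralytics YOLOv8 checkpoint fine-tuned on a fire dataset (the weights name, video file name, and confidence threshold below are placeholders):

```python
import cv2
from ultralytics import YOLO

# "fire_yolov8.pt" and "footage.mp4" are placeholder names, not the project's files.
model = YOLO("fire_yolov8.pt")

cap = cv2.VideoCapture("footage.mp4")
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.4, verbose=False)[0]
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        # Log where and when fire was detected, for post-event analysis
        print(f"frame {frame_idx}: fire at ({x1},{y1})-({x2},{y2}), conf={float(box.conf):.2f}")
    frame_idx += 1
cap.release()
```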
This project implements a lane-based vehicle counting system using YOLOv8, a state-of-the-art object detection model. The system processes video footage to detect and track vehicles as they pass through predefined lanes. YOLOv8's fast, accurate detection keeps the counts reliable even in dynamic traffic conditions, making the solution well suited to traffic monitoring, smart-city applications, and transportation management, where it provides real-time insight into vehicle flow and congestion levels.
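The counting logic can be sketched as follows, assuming Ultralytics' built-in tracker and simple pixel x-ranges as lane boundaries; the lane coordinates, COCO class filter, video file, and weights name are illustrative assumptions.

```python
from collections import defaultdict
import cv2
from ultralytics import YOLO

# Lane x-ranges, class IDs, and file names below are assumed for illustration.
model = YOLO("yolov8n.pt")
LANES = {"lane_1": (0, 640), "lane_2": (640, 1280)}     # pixel x-ranges per lane
VEHICLE_CLASSES = {2, 3, 5, 7}                           # car, motorcycle, bus, truck
counted = defaultdict(set)                               # lane -> set of track IDs

cap = cv2.VideoCapture("traffic.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.track(frame, persist=True, verbose=False)[0]
    if results.boxes.id is None:
        continue
    for box, track_id in zip(results.boxes, results.boxes.id.int().tolist()):
        if int(box.cls) not in VEHICLE_CLASSES:
            continue
        cx = float((box.xyxy[0][0] + box.xyxy[0][2]) / 2)    # box centre x
        for lane, (x_min, x_max) in LANES.items():
            if x_min <= cx < x_max:
                counted[lane].add(track_id)                   # each ID counted once
cap.release()
print({lane: len(ids) for lane, ids in counted.items()})
```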
This project utilizes YOLOv11 for vehicle speed detection through video analysis. The system captures frames from surveillance cameras and processes them in real time to detect and track vehicles. By measuring the time each vehicle takes to pass between predefined points in the video, the system calculates its speed. YOLOv11's object detection and tracking accuracy make the solution well suited to traffic monitoring, law enforcement, and smart-city applications that need to flag speeding vehicles.
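A simplified version of the two-point speed calculation, assuming Ultralytics tracking, two virtual lines at fixed pixel rows, and a known real-world distance between them; the line positions, distance, file names, and weights are all placeholders.

```python
import cv2
from ultralytics import YOLO

# Line rows, GAP_METERS, and file names below are assumed calibration values.
model = YOLO("yolo11n.pt")
LINE_A_Y, LINE_B_Y = 400, 700     # pixel rows of the two virtual lines
GAP_METERS = 20.0                 # measured road distance between the lines

cap = cv2.VideoCapture("highway.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
entry_frame = {}                  # track ID -> frame index when line A was crossed
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.track(frame, persist=True, verbose=False)[0]
    if results.boxes.id is not None:
        for box, tid in zip(results.boxes, results.boxes.id.int().tolist()):
            cy = float((box.xyxy[0][1] + box.xyxy[0][3]) / 2)    # box centre y
            if tid not in entry_frame and LINE_A_Y <= cy < LINE_B_Y:
                entry_frame[tid] = frame_idx                      # entered the measured zone
            elif tid in entry_frame and cy >= LINE_B_Y:
                elapsed = (frame_idx - entry_frame.pop(tid)) / fps
                if elapsed > 0:
                    speed_kmh = GAP_METERS / elapsed * 3.6        # m/s -> km/h
                    print(f"vehicle {tid}: {speed_kmh:.1f} km/h")
    frame_idx += 1
cap.release()
```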