Mechatronics Projects
-
-
Project Description: Design and Validation of an Electric Wheelchair Digital Twin
This project focused on enhancing the realism of an electric wheelchair simulator by developing and validating a digital twin. The primary objective was to create a highly accurate virtual model, calibrated with real-world data, that faithfully reproduces the physical behavior of a real wheelchair. This precision is essential for demanding applications such as powerchair football, where maneuver accuracy is critical.
Key elements of the project include:
- Analytical Modeling with MATLAB/Simulink: I developed a fundamental mass-spring-damper analytical model in MATLAB/Simulink to represent the wheelchair as a vertical dynamic system. This model was used to simulate and analyze the chair's response to perturbations (e.g., crossing an obstacle), forming the theoretical basis for defining essential dynamic parameters such as position, rotation, and pitch.
- Digital Twin Development in Unreal Engine 5: I created a virtual wheelchair within Unreal Engine 5. This involved optimizing a 3D model in 3ds Max, importing it as a Skeletal Mesh, and configuring realistic physical properties (mass, inertia, constraints) for its chassis, drive wheels, and caster wheels. The wheelchair's control logic was programmed in Blueprints and extended with C++ in Visual Studio to enable real-time parameter extraction (position, rotation), resulting in an interactive simulator that responds to user inputs and physical constraints.
- Experimental Validation with Qualisys: To validate the model against reality, I conducted experiments using a Qualisys motion capture system (12 infrared cameras). A real wheelchair, fitted with reflective markers, had its dynamics precisely recorded during obstacle crossing (e.g., height variations, pitch, rotations). Simultaneously, data from the Unreal simulator (position, rotation) was extracted in real-time via a custom C++ module and saved, providing synchronized real-world and virtual data sources.
- Data Overlay and Analysis in MATLAB: I developed a dedicated MATLAB application to compare the real and simulated data. This tool imports and displays curves from both Unreal and Qualisys side by side (e.g., Z-height evolution, pitch angle), highlighting discrepancies such as high-frequency vibrations present in the real chair but absent in the initial model. An integrated optimization function identifies the physical parameters (stiffness, damping, inertia) that best match reality, making MATLAB central to validating and refining the simulator.
- Optimization Loop (Core of the Project): The core of this work was an iterative optimization loop. Discrepancies between real and simulated data were analyzed in MATLAB, an algorithm automatically adjusted the model's physical parameters (e.g., stiffness, damping, inertia, mass), and the corrected parameters were then fed back into Unreal Engine for a new simulation. By repeating the simulation and comparison, this cycle progressively made the digital twin converge towards the actual behavior of the real wheelchair (a minimal numerical sketch of the model and the fitting step follows this list).
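As a rough illustration of this approach (the project itself used MATLAB/Simulink and a dedicated MATLAB application), the Python sketch below combines a quarter-model of the vertical dynamics with a parameter-fitting step; the mass, bump profile, starting values, and the placeholder z_meas standing in for the recorded Qualisys trace are all assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    m = 120.0                              # hypothetical mass (kg) of chair plus occupant
    t_meas = np.linspace(0.0, 3.0, 300)    # time base of the recording
    z_meas = np.zeros_like(t_meas)         # placeholder: load the measured Z-height from Qualisys here

    def road(t):
        # idealized 2 cm bump between 1.0 s and 1.2 s, standing in for the real obstacle profile
        return 0.02 if 1.0 <= t <= 1.2 else 0.0

    def simulate(k, c):
        # vertical quarter-model: m*z'' + c*z' + k*(z - z_road(t)) = 0
        def rhs(t, y):
            z, zdot = y
            return [zdot, (-c * zdot - k * (z - road(t))) / m]
        sol = solve_ivp(rhs, (t_meas[0], t_meas[-1]), [0.0, 0.0], t_eval=t_meas)
        return sol.y[0]

    def cost(p):
        k, c = p
        return float(np.sum((simulate(k, c) - z_meas) ** 2))  # discrepancy between twin and reality

    # Fitting step: start from nominal stiffness/damping and let the optimizer refine them.
    res = minimize(cost, x0=[20000.0, 800.0], method="Nelder-Mead")
    k_opt, c_opt = res.x                   # corrected parameters to feed back into the Unreal Engine model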
This project successfully delivered an experimentally validated digital twin simulator that is more faithful in its dynamic reactions, capable of reproducing real wheelchair behaviors, and ready for concrete applications like training powerchair football athletes in an immersive environment. It provided invaluable experience across multiple engineering disciplines, including dynamic system modeling, real-time simulation development in game engines, experimental validation with motion capture, and data analysis and optimization.
-
-
Project Description: Automated Sorting System with Delta Robot and Computer Vision
This project involved the design, fabrication, and programming of an automated sorting system. At its heart is a custom-built Delta robot, chosen for its speed and precision, which is commanded using computer vision for object identification and manipulation. The system demonstrates a comprehensive approach from mechanical conception to integrated electronic control.
Key elements of the project include:
- Mechanical Design and Fabrication: I designed the robot's parallel kinematic architecture in SolidWorks, then fabricated the physical structure and components in a workshop.
- Hardware Platform: The robot is controlled by a Raspberry Pi 3. I used Dynamixel motors, known for their accuracy and feedback capabilities, to drive the robot's arms.
- Kinematic Modeling and Control: I derived and implemented the inverse geometric model of the Delta robot. This model allows the Raspberry Pi to calculate the joint angles required of the Dynamixel motors from the desired Cartesian coordinates of the object to pick or place (a minimal sketch of this computation follows the list).
- Computer Vision Integration: I integrated computer vision into the system. This enables the robot to process camera input, detect specific objects, and determine their positions in the workspace.
- Software Development and Remote Control: I developed the control software on the Raspberry Pi to link the computer vision data with the kinematic calculations and command the Dynamixel motors. I also implemented functionality for remote control, allowing the robot to be operated or monitored by connecting to the same network as the Raspberry Pi.
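The inverse geometric model can be illustrated with the Python sketch below, which follows the standard closed-form delta-robot solution; the geometry constants, the sign convention (z negative below the base plane), and the function names are illustrative assumptions rather than the exact code running on the Raspberry Pi.

    from math import sqrt, atan2

    # Hypothetical geometry (meters): base triangle side f, effector triangle side e,
    # upper arm length rf, forearm length re. Replace with the real robot's dimensions.
    f, e, rf, re = 0.20, 0.06, 0.10, 0.24
    TAN30 = 1.0 / sqrt(3.0)

    def angle_yz(x0, y0, z0):
        """Shoulder angle of one arm, with the target expressed in that arm's YZ plane."""
        y1 = -0.5 * TAN30 * f              # base joint position
        y0 -= 0.5 * TAN30 * e              # shift the effector center to the arm's attachment point
        a = (x0 * x0 + y0 * y0 + z0 * z0 + rf * rf - re * re - y1 * y1) / (2.0 * z0)
        b = (y1 - y0) / z0
        d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)   # discriminant
        if d < 0:
            raise ValueError("target outside the workspace")
        yj = (y1 - a * b - sqrt(d)) / (b * b + 1.0)
        zj = a + b * yj
        return atan2(-zj, y1 - yj)

    def inverse_kinematics(x, y, z):
        """Joint angles (rad) of the three arms for an effector position with z < 0 below the base."""
        c120, s120 = -0.5, sqrt(3.0) / 2.0
        return (
            angle_yz(x, y, z),
            angle_yz(x * c120 + y * s120, y * c120 - x * s120, z),
            angle_yz(x * c120 - y * s120, y * c120 + x * s120, z),
        )

The same per-arm solver is reused for all three arms by rotating the target point by 120 degrees, which is what makes the parallel architecture convenient to control.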
This project provided invaluable experience across multiple engineering disciplines: mechanical design (CAD and fabrication), embedded systems programming (Raspberry Pi), robotics kinematics and control, advanced motor control (Dynamixel), computer vision, and network communication. It showcases my ability to build and program a complex robotic system from the ground up to perform an autonomous task like sorting.
-
-
Project Description: Autonomous Mobile Robot with Line-Following and Obstacle Detection
This project, completed as part of a semester project for the Master’s program in Mechatronics, focuses on the design, simulation, and implementation of an autonomous mobile robot. The robot is equipped with advanced features for line-following, obstacle detection, and remote control capabilities.
Key elements of the project include:
- Mechanical Design: Chassis and component modeling using SolidWorks.
- Electronic Integration: Circuit design simulated on Proteus, integrating an Arduino board, ultrasonic sensors for obstacle detection, and infrared sensors for line-following.
- Software Development: Implementation of the control logic in C++ with the Arduino IDE to achieve autonomous and remote-controlled operation (the core decision logic is sketched after this list).
- Simulation: Flowchart-based system simulation to validate control algorithms and logic.
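For illustration, the decision logic implemented by the C++ firmware can be summarized in the Python sketch below; the thresholds, speed values, and the drive() callback are hypothetical.

    OBSTACLE_CM = 15   # stop if the ultrasonic sensor reports anything closer than this (assumption)

    def control_step(left_ir_on_line, right_ir_on_line, distance_cm, drive):
        """One control cycle: stop for obstacles, otherwise steer to keep the line centered."""
        if distance_cm < OBSTACLE_CM:
            drive(0, 0)          # obstacle ahead: stop (or start an avoidance maneuver)
        elif left_ir_on_line and right_ir_on_line:
            drive(200, 200)      # line under both sensors: go straight
        elif left_ir_on_line:
            drive(80, 200)       # line drifting to the left: slow the left wheel to turn left
        elif right_ir_on_line:
            drive(200, 80)       # line drifting to the right: turn right
        else:
            drive(120, -120)     # line lost: rotate in place to search for it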
This project demonstrates expertise in robotics, from mechanical design to electronic integration and control, showcasing real-world applications in automation and navigation.
-
-
Project Description: Ultrasonic Radar Proximity Detector with WPF Alarm Control
This project implements a complete proximity detection and alarm system: an Arduino reads distance data from an HC-SR04 ultrasonic sensor, computes whether an obstacle is within a danger zone, and streams formatted messages over serial. A C# WPF application receives these messages, displays real-time distance and system status, and plays a looping alarm sound when an object is too close.
Key elements of the project include:
- Arduino Sensor Module: HC-SR04 ultrasonic sensor on pins 9 (TRIG) and 10 (ECHO); the echo pulse width is measured with pulseIn() to compute the distance in cm, and the messages "D:<distance>" and "ALARM:ON/OFF" are sent over Serial at 9600 baud (a minimal host-side parsing sketch follows this list).
- Serial Communication: the C# application uses System.IO.Ports.SerialPort to open the COM port and read lines asynchronously; a DataReceived handler parses each line and updates the UI labels and indicators accordingly.
- WPF GUI (C#): MVVM-style XAML layout with a ComboBox for port selection, Connect/Refresh buttons, and panels for radar and alarm status; Dispatcher.Invoke marshals UI updates from the background thread.
- Audible Alarm: MediaPlayer plays a looping MP3 while the alarm is active, with graceful handling of missing files or load errors and user notifications.
- State Management: a threshold constant (50 cm) defines the danger zone; colored Ellipses and TextBlocks provide visual cues for "Zone libre" (clear zone), "Objet détecté" (object detected), and "Alarme active" (alarm active); clean connect/disconnect logic and proper cleanup on window closing.
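The desktop side of the project is the C# WPF application described above; as a quick illustration of the serial protocol only, here is a minimal host-side parser sketched in Python with pyserial (the port name, timeout, and exact alarm message spelling are assumptions).

    import serial  # pyserial

    # Open the Arduino's serial port; the port name is machine-specific.
    with serial.Serial("COM3", 9600, timeout=1) as port:
        while True:
            line = port.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("D:"):
                distance_cm = float(line[2:])       # "D:<distance>" message
                print(f"distance: {distance_cm} cm")
            elif line == "ALARM:ON":
                print("object inside the 50 cm danger zone")
            elif line == "ALARM:OFF":
                print("zone clear")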
This project demonstrates end-to-end integration of embedded sensor hardware with a rich desktop interface, covering real-time serial data parsing, thread-safe UI updates, multimedia playback, and robust error handling.
-
-
Project Description: Computer Vision-Based Traffic Flow Analysis and Vehicle Counting using Python
This project involves the development of a real-time traffic monitoring system capable of detecting, tracking, and counting vehicles in a video stream. Built with Python, the system utilizes state-of-the-art deep learning models and computer vision techniques to analyze traffic density and flow. The application provides an automated solution for infrastructure monitoring and urban planning data collection.
Key functionalities include:
- Object Detection with YOLOv8: Integration of the Ultralytics YOLOv8 (You Only Look Once) model to accurately identify various vehicle classes, such as cars, trucks, and buses, even in complex outdoor environments (a detection-and-counting sketch follows this list).
- Multi-Object Tracking (MOT): Implementation of tracking algorithms to assign unique IDs to each detected vehicle. This ensures that individual objects are followed across frames, preventing multiple counts for the same vehicle.
- Virtual Line Crossing Logic: Development of a virtual "trigger line" within the video frame. The system increments the counter only when a vehicle's centroid crosses this line, enabling precise flow measurement.
- Real-Time Visual Analytics: Dynamic rendering of bounding boxes, tracking IDs, and an on-screen counter overlay, providing immediate visual feedback on the detection and counting process.
- Performance Optimization: Utilization of OpenCV for efficient video frame processing and model inference, ensuring the system can operate at high frame rates suitable for real-time surveillance.
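A minimal sketch of the detection, tracking, and line-crossing logic with the Ultralytics API and OpenCV is shown below; the model weights, video file, line position, and class IDs are illustrative assumptions rather than the project's exact configuration.

    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")            # pretrained COCO model: 2 = car, 5 = bus, 7 = truck
    LINE_Y = 400                          # y-coordinate of the virtual trigger line (assumption)
    counted_ids, last_y, total = set(), {}, 0

    cap = cv2.VideoCapture("traffic.mp4") # hypothetical input video
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detect and track vehicles; persist=True keeps track IDs consistent across frames.
        result = model.track(frame, persist=True, classes=[2, 5, 7], verbose=False)[0]
        if result.boxes.id is not None:
            for box, tid in zip(result.boxes.xyxy.tolist(), result.boxes.id.int().tolist()):
                cy = (box[1] + box[3]) / 2.0                      # centroid y of the vehicle
                # Count each track ID once, when its centroid crosses the line from above.
                if tid in last_y and last_y[tid] < LINE_Y <= cy and tid not in counted_ids:
                    counted_ids.add(tid)
                    total += 1
                last_y[tid] = cy
        cv2.line(frame, (0, LINE_Y), (frame.shape[1], LINE_Y), (0, 255, 255), 2)
        cv2.putText(frame, f"count: {total}", (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("traffic", frame)
        if cv2.waitKey(1) == 27:          # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()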
This project demonstrates proficiency in Python programming, deep learning integration, and the practical application of computer vision to solve real-world problems. It showcases the ability to process unstructured video data into actionable traffic insights.
-
-
Project Description: Smart Parking Management and Occupancy Detection System using Python and Computer Vision
This project focuses on the development of an automated parking monitoring system designed to detect and manage the occupancy of parking spaces in real-time. Built with Python and OpenCV, the system processes aerial video feeds to provide a dynamic digital overview of available spots, optimizing parking lot management and enhancing user experience.
Key functionalities include:
- Interactive ROI Configuration: Implementation of a custom configuration tool that allows users to manually define Regions of Interest (ROIs) for each parking spot using mouse events. These coordinates are stored (via Pickle) for persistent use (the occupancy check that consumes these stored ROIs is sketched after this list).
- Advanced Image Pre-processing: Utilization of a processing pipeline including grayscale conversion, Gaussian blurring, and adaptive thresholding to isolate vehicle signatures from the pavement background under varying lighting conditions.
- Real-Time Occupancy Analysis: The system counts non-zero pixels within each defined ROI. By setting a specific density threshold, it automatically determines if a spot is "Free" or "Occupied."
- Dynamic Visual Dashboard: A real-time overlay displays the status of each spot directly on the video feed—using green rectangles for available spaces and red for occupied ones—along with a live counter (e.g., "LIBRE: 3 / 4").
- Efficient Data Handling: Use of NumPy for fast array manipulations and localized pixel counting, ensuring the system can monitor large parking lots with minimal latency.
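A condensed sketch of the occupancy check is shown below; the pickle file name and format, spot dimensions, and pixel-count threshold are assumptions, and the real application adds the interactive ROI editor around this loop.

    import pickle
    import cv2

    # Spot positions saved by the ROI tool; assumed format: list of (x, y) top-left corners.
    with open("parking_spots.pkl", "rb") as f:
        spots = pickle.load(f)
    SPOT_W, SPOT_H = 107, 48   # hypothetical spot size in pixels
    THRESHOLD = 900            # max non-zero pixels for a spot to still count as free

    cap = cv2.VideoCapture("parking.mp4")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (3, 3), 1)
        # Adaptive thresholding separates vehicle edges and shadows from the uniform pavement.
        mask = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY_INV, 25, 16)
        free = 0
        for (x, y) in spots:
            roi = mask[y:y + SPOT_H, x:x + SPOT_W]
            occupied = cv2.countNonZero(roi) > THRESHOLD
            free += 0 if occupied else 1
            color = (0, 0, 255) if occupied else (0, 255, 0)
            cv2.rectangle(frame, (x, y), (x + SPOT_W, y + SPOT_H), color, 2)
        cv2.putText(frame, f"LIBRE: {free} / {len(spots)}", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("parking", frame)
        if cv2.waitKey(30) == 27:
            break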
This project demonstrates proficiency in Python programming, image processing algorithms, and interactive software design. It showcases the practical application of computer vision to solve logistical challenges in smart city infrastructure and urban mobility.
-
-
Project Description: Computer Vision-Based Industrial Bottle Counting and Sorting System using Python
This project focuses on the development of an automated computer vision system designed for high-speed industrial sorting and inventory tracking. Using Python and deep learning models, the system identifies, tracks, and counts different colored bottles (Blue, Red, and Yellow) as they move along a conveyor belt. This application simulates a critical quality control and logistics task common in the pharmaceutical, food, and beverage industries.
Key functionalities include:
- Advanced Object Classification: Integration of a YOLOv8-based model trained to distinguish between specific product variants based on color and shape, ensuring accurate sorting even in dynamic industrial environments (the per-color counting logic is sketched after this list).
- Real-Time Multi-Object Tracking: Implementation of tracking logic to follow each individual bottle as it moves across the frame. This ensures that each object is uniquely identified, preventing duplicate counts as the conveyor advances.
- Trigger-Line Counting Mechanism: Development of a virtual detection line (vertical trigger). The system logic increments the specific color counter only when the centroid of a bottle crosses this line, providing precise throughput measurements.
- Interactive Data Overlay: Real-time rendering of a user dashboard directly on the video feed, displaying individual counts for each color class (Blue, Red, Yellow) as well as the total production count.
- High-Performance Image Processing: Utilization of OpenCV for efficient frame capture and pre-processing, allowing the system to maintain a high frame rate essential for real-time monitoring of industrial conveyors.
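Since frame capture and overlay rendering mirror the traffic project above, the sketch below focuses on the per-color counting logic only; the weights file, class indices, and trigger-line position are assumptions.

    from collections import defaultdict
    from ultralytics import YOLO

    model = YOLO("bottles.pt")                 # hypothetical custom model trained on the three variants
    CLASS_NAMES = {0: "Blue", 1: "Red", 2: "Yellow"}
    LINE_X = 500                               # x-coordinate of the vertical trigger line (pixels)
    counts = defaultdict(int)                  # per-color totals shown on the dashboard
    counted_ids, last_x = set(), {}

    def update_counts(frame):
        """Track bottles in one frame and count each track ID once when it passes the line."""
        result = model.track(frame, persist=True, verbose=False)[0]
        if result.boxes.id is None:
            return counts
        for box, tid, cls in zip(result.boxes.xyxy.tolist(),
                                 result.boxes.id.int().tolist(),
                                 result.boxes.cls.int().tolist()):
            cx = (box[0] + box[2]) / 2.0                          # centroid x of the bottle
            if tid in last_x and last_x[tid] < LINE_X <= cx and tid not in counted_ids:
                counted_ids.add(tid)                              # never count the same bottle twice
                counts[CLASS_NAMES.get(cls, "other")] += 1
            last_x[tid] = cx
        return counts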
This project demonstrates a strong proficiency in Python programming, Deep Learning integration, and the practical application of computer vision for industrial automation. It highlights the ability to transform raw video data into actionable production insights and automated inventory management.
-
-
Project Description: Industrial Quality Control and Surface Defect Detection using Deep Learning
This project involves the development of an automated visual inspection system designed to achieve a "Zero Defect" standard in manufacturing. Using Python, TensorFlow, and Keras, the system employs advanced Deep Learning models (such as Autoencoders or CNNs) to detect subtle surface anomalies, such as scratches or cracks, on industrial metal parts. This application provides a consistent and scalable alternative to manual human inspection.
Key functionalities include:
- Anomaly Detection Modeling: Implementation of a deep learning architecture trained primarily on "good" samples to learn the nominal features of a perfect part. This allows the model to identify any deviation from the norm as a potential defect (a minimal model sketch follows this list).
- Surface Defect Identification: The system is capable of detecting fine surface irregularities, localized scratches, and structural flaws that might be missed by traditional rule-based vision systems.
- Real-Time Classification: Processing of live video or high-resolution images to provide instantaneous "OK" or "DEFECT" status for each component on the production line.
- Visual Inspection Dashboard: Dynamic rendering of results with bounding boxes—green for passed parts and red for defective ones—along with loss-based confidence metrics for transparent quality assessment.
- Integrated AI Pipeline: A complete workflow from dataset preparation and model training to real-time inference, leveraging the computational efficiency of the TensorFlow and OpenCV libraries.
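A minimal sketch of the autoencoder-based approach in TensorFlow/Keras is shown below; the network size, image resolution, and error threshold are illustrative assumptions, not the trained production model.

    import numpy as np
    from tensorflow.keras import layers, models

    IMG = 128   # assumed input resolution (grayscale)

    def build_autoencoder():
        inp = layers.Input(shape=(IMG, IMG, 1))
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)              # compressed representation of a defect-free surface
        x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D(2)(x)
        out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        return model

    # Training uses only "good" samples: autoencoder.fit(x_good, x_good, epochs=30, batch_size=32)

    def inspect(autoencoder, image, threshold=0.01):
        """Return 'OK' or 'DEFECT' plus the reconstruction error for one normalized image."""
        recon = autoencoder.predict(image[np.newaxis, ...], verbose=0)[0]
        error = float(np.mean((image - recon) ** 2))   # loss-based confidence metric
        return ("DEFECT" if error > threshold else "OK"), error

In practice the threshold would be calibrated on a held-out set of defect-free images rather than fixed in advance.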
This project showcases a strong proficiency in Deep Learning, Computer Vision, and Python-based software development. It highlights the ability to implement AI-driven solutions to improve manufacturing quality, reduce waste, and optimize industrial production cycles.
-
-
Project Description: Mobile Robot Navigation and Real-Time GPS Tracking Simulation in Webots
This project involves the design, programming, and simulation of an autonomous mobile robot within the Webots robotics simulation environment. The core objective is to develop a functional robotic platform capable of executing predefined trajectories while simultaneously capturing and logging its geospatial coordinates using a virtual GPS sensor. This project provides a practical foundation for developing autonomous navigation and localization algorithms.
Key functionalities include:
- 3D Environment and Robot Modeling: Construction of a virtual test environment and a 4-wheeled mobile robot platform within Webots, incorporating realistic physical properties and sensor placement.
- Python-Based Controller Development: Implementation of the robot's logic using a Python controller. This involves managing motor velocities, calculating motion paths, and handling real-time sensor feedback (a minimal controller sketch follows this list).
- GPS Sensor Integration: Utilization of a virtual GPS node to continuously monitor the robot's 3D position (X, Y, Z). This demonstrates the process of data extraction for trajectory analysis and localization.
- Real-Time Data Logging: Live streaming of the robot's coordinates to the simulation console, providing immediate feedback on the robot's displacement and the accuracy of its intended path.
- Trajectory Simulation and Analysis: Validation of the robot's motion through various navigational patterns, ensuring smooth transitions and consistent data capture across different terrains.
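A minimal Webots controller sketch illustrating the GPS logging loop is shown below; the device names, wheel configuration, and cruise speed are assumptions that depend on the robot model built in the world file.

    from controller import Robot   # Webots Python API

    robot = Robot()
    timestep = int(robot.getBasicTimeStep())

    gps = robot.getDevice("gps")               # device names must match the robot definition
    gps.enable(timestep)
    wheels = [robot.getDevice(name) for name in ("wheel1", "wheel2", "wheel3", "wheel4")]
    for wheel in wheels:
        wheel.setPosition(float("inf"))        # switch the motor to velocity control
        wheel.setVelocity(0.0)

    CRUISE = 4.0                               # rad/s, illustrative cruise speed

    while robot.step(timestep) != -1:
        x, y, z = gps.getValues()              # current 3D position of the robot
        print(f"position: x={x:.3f}  y={y:.3f}  z={z:.3f}")   # real-time log to the console
        for wheel in wheels:
            wheel.setVelocity(CRUISE)          # drive straight; replace with the trajectory logic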
This project showcases a strong understanding of autonomous mobile robotics, sensor integration, and virtual prototyping. It highlights proficiency in using specialized simulation software (Webots) and Python programming to design and test complex robotic behaviors before physical deployment.
-
-
Project Description: Autonomous Mobile Robot with Reactive Navigation and Obstacle Avoidance in Webots
This project involves the development and simulation of an autonomous mobile robot programmed for reactive navigation within a complex environment. Built in Webots, the robot is equipped with a suite of proximity sensors and a custom navigation algorithm that allows it to explore its surroundings while dynamically avoiding collisions with static obstacles. This project demonstrates the implementation of core autonomous behaviors using sensor-based feedback.
Key functionalities include:
- Reactive Navigation Logic: Implementation of a Braitenberg-inspired navigation algorithm. The robot's motor velocities are dynamically adjusted in real-time based on proximity sensor readings, enabling it to "steer away" from obstacles as it encounters them (a minimal controller sketch follows this list).
- Proximity Sensor Array: Integration of multiple infrared (IR) or ultrasonic sensor nodes around the robot's chassis. Each sensor provides localized distance data, allowing the robot to map its immediate surroundings and detect obstacles from various angles.
- Autonomous Exploration: The robot is designed to operate without pre-mapped path data. It relies solely on its onboard sensors and reactive logic to traverse the 3D environment, demonstrating a robust collision-free exploration capability.
- Dynamic Simulation Environment: Construction of a virtual world in Webots featuring various obstacles (walls, blocks). The simulation validates the robot's ability to handle tight spaces and complex geometry.
- Python-Based Controller: The robot's behavior is controlled by a Python script that processes sensor inputs and executes motor commands at every simulation step, ensuring high responsiveness to environmental changes.
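A minimal sketch of such a Braitenberg-style controller is shown below; the sensor names, normalization constant, and weight vectors are assumptions that depend on the robot's sensor layout.

    from controller import Robot

    robot = Robot()
    timestep = int(robot.getBasicTimeStep())

    # Eight proximity sensors around the chassis and two wheel motors (device names assumed).
    sensors = []
    for i in range(8):
        s = robot.getDevice(f"ps{i}")
        s.enable(timestep)
        sensors.append(s)
    left = robot.getDevice("left wheel motor")
    right = robot.getDevice("right wheel motor")
    for motor in (left, right):
        motor.setPosition(float("inf"))
        motor.setVelocity(0.0)

    MAX_SPEED = 6.28
    # Braitenberg coupling weights (illustrative): each sensor pushes the robot away from what it sees.
    LEFT_W  = [-0.5, -0.3, -0.1, 0.0, 0.0, 0.1, 0.3, 0.5]
    RIGHT_W = [ 0.5,  0.3,  0.1, 0.0, 0.0, -0.1, -0.3, -0.5]

    while robot.step(timestep) != -1:
        values = [s.getValue() / 4096.0 for s in sensors]   # normalized readings (scale is sensor-specific)
        left_speed, right_speed = 0.5 * MAX_SPEED, 0.5 * MAX_SPEED
        for v, lw, rw in zip(values, LEFT_W, RIGHT_W):
            left_speed += lw * v * MAX_SPEED
            right_speed += rw * v * MAX_SPEED
        left.setVelocity(max(-MAX_SPEED, min(MAX_SPEED, left_speed)))
        right.setVelocity(max(-MAX_SPEED, min(MAX_SPEED, right_speed)))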
This project showcases a strong understanding of autonomous robotics, sensor fusion, and reactive control systems. It highlights proficiency in using specialized simulation software (Webots) and Python to develop and test robust navigation behaviors for mobile robotic platforms.
-
-
Project Description: Autonomous Robot Navigation and Mapping using a LiDAR Sensor in Webots
This project focuses on the development and simulation of an autonomous mobile robot utilizing a LiDAR (Light Detection and Ranging) sensor for advanced environmental sensing and navigation. Built within the Webots environment, the robot is programmed to perceive its surroundings with high angular resolution, enabling it to navigate safely and efficiently through static and dynamic obstacles. This simulation provides a high-fidelity environment for developing perception and obstacle-avoidance algorithms.
Key functionalities include:
- LiDAR Sensor Integration: Deployment of a virtual LiDAR node that emits laser pulses to measure distances in a 360-degree field of view. The system captures high-resolution "point cloud" data used to construct a top-down representation of the environment (a steering sketch based on this scan data follows this list).
- Real-Time Data Visualization: Implementation of a dynamic "LiDAR plot" using Matplotlib. This visual dashboard displays the live scan data, showing obstacle proximity and orientation relative to the robot's heading.
- Autonomous Navigation and Collision Avoidance: Development of a navigation algorithm that processes LiDAR scan data to identify safe passage zones. The robot dynamically adjusts its speed and steering to maintain a safe buffer from all detected objects.
- Python-Based Controller: The robot's intelligence is managed by a Python script that interfaces with the Webots API, processing high-volume LiDAR data streams and executing real-time motor commands.
- High-Fidelity Virtual World: Construction of a virtual testing ground with various wall structures and corridors, providing a realistic environment to validate the accuracy and responsiveness of the LiDAR-based control system.
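A minimal sketch of the LiDAR-driven steering logic is shown below; the device names, sector indexing, and distance thresholds are assumptions that depend on the lidar's field of view and mounting.

    import math
    from controller import Robot

    robot = Robot()
    timestep = int(robot.getBasicTimeStep())

    lidar = robot.getDevice("lidar")           # device names must match the world file
    lidar.enable(timestep)
    left = robot.getDevice("left wheel motor")
    right = robot.getDevice("right wheel motor")
    for motor in (left, right):
        motor.setPosition(float("inf"))
        motor.setVelocity(0.0)

    MAX_SPEED = 6.0
    SAFE_DIST = 0.5                            # meters of clearance to keep ahead of the robot

    while robot.step(timestep) != -1:
        ranges = lidar.getRangeImage()         # one distance per beam, ordered across the field of view
        n = len(ranges)
        # The middle third is treated as "ahead"; the exact indexing depends on the lidar configuration.
        ahead = [r for r in ranges[n // 3: 2 * n // 3] if not math.isinf(r)]
        front = min(ahead) if ahead else float("inf")
        left_score = sum(min(r, 2.0) for r in ranges[: n // 2])    # crude free-space score per side
        right_score = sum(min(r, 2.0) for r in ranges[n // 2:])
        if front > SAFE_DIST:
            left.setVelocity(MAX_SPEED)        # path ahead is clear: drive straight
            right.setVelocity(MAX_SPEED)
        elif left_score > right_score:
            left.setVelocity(-0.5 * MAX_SPEED) # more room on the left: rotate left
            right.setVelocity(0.5 * MAX_SPEED)
        else:
            left.setVelocity(0.5 * MAX_SPEED)  # otherwise rotate right
            right.setVelocity(-0.5 * MAX_SPEED)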
This project demonstrates proficiency in sensor-driven robotics, data visualization, and autonomous systems. It highlights the ability to use specialized simulation tools (Webots) and Python to implement advanced perception algorithms for mobile robotic platforms.