Jl. Raya ITS Sukolilo, Surabaya, Indonesia

Prof. Dadet Pramadihanto, Ph.D.

NIP. 196202111988111001

Publication List
19-09-2022

Mechanical Design and Forward Kinematics Analysis of T-FLoW 3.0 Prosthetic Robot Hand: Lever-based Finger Movement Mechanism

K. I. Apriandy, B. Sena Bayu Dewantara, R. S. Dewanto, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2022

Keywords : T-FLoW 3.0 robot’s hand development, prosthetic robot hand, mechanical design, lever-based finger movement mechanism, 3D printing, SG92R micro-servo, forward kinematics analysis, static structural analysis

In this research, a prosthetic robot hand featuring a lever-based finger movement mechanism is proposed as a new approach to the T-FLoW 3.0 humanoid robot's hand development. The proposed approach performs both grasping and releasing movements by pushing or pulling the finger-attached lever. The lever is pushed or pulled by a micro-servo, which uses a stiff bar to transfer force from the servo horn to the finger's lever. Our prosthetic robot hand is equipped with six joints, six SG92R micro-servos as actuators, and six force-sensitive resistors (FSR) for grasping feedback. 3D printing is used to give the hand a realistic appearance, and PLA filament is used in the manufacturing process to keep the hand low-cost, lightweight, and easy to maintain. Static structural analysis simulation results lead to the conclusion that our prosthetic robot hand can sustain a load of around 30 N. With the lever-based finger movement mechanism, the proposed approach is expected to overcome the mechanical slip issues in finger movements that were often experienced in the previous approach to the T-FLoW 3.0 humanoid robot's hand development.

Open Link
08-11-2021

Walking Gait Learning for “T-FLoW” Humanoid Robot Using Rule-Based Learning

F. Ulurrasyadi, R. S. Dewanto, A. Barakbah, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2021

Keywords : T-FLoW, humanoid robot, learning, walking gait, CoppeliaSim

This work presents a fast and simple learning algorithm for humanoid robot walking gait. The standard reinforcement learning approach takes too much time to learn a stable walking gait, so we propose a rule-based learning method that has not previously been used for this kind of walking gait learning. We implement our method on a simplified T-FLoW humanoid robot model in the CoppeliaSim simulation software. The results show that, using our proposed method, the T-FLoW humanoid robot can walk for 200 steps after a learning process of about 800 episodes and achieves better walking performance than classical pattern generation for planning a walking gait motion.

Open Link
08-11-2021

Forward Kinematics with Full-Arm Analysis on “T-FLoW” 3.0 Humanoid Robot

W. Dewandhana, K. I. Apriandy, B. S. B. Dewantara, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2021

Keywords : Kinematics, T-FLoW Humanoid Robot, homogeneous matrices, Forward Kinematics

This paper develops and analyzes the arm and hand mechanical system of the T-FLoW Humanoid Robot, which consists of a 7 Degree of Freedom (DoF) arm and a 6 Degree of Freedom (DoF) hand. With kinematic calculations, a mathematical model of the arm can be obtained using rotation and translation matrices based on the rotation frame at each joint of the robotic arm and hand. Forward Kinematics (FK) analysis requires a combination of homogeneous matrices obtained from the rotation frame of each joint and the distance between joints. The Forward Kinematics results are then used for robot modeling in a Matlab visualization and compared against the robot's hand and arm model in V-REP, so that the actual pose of the arm and hand of the T-FLoW humanoid robot can be determined.
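For illustration, a minimal numpy sketch of composing homogeneous transformation matrices along a serial chain is shown below; the joint axes, link lengths, and angles are placeholders for illustration only, not the actual T-FLoW 3.0 arm parameters.

```python
# Minimal forward-kinematics sketch: compose homogeneous transforms joint by
# joint along a serial chain. Joint axes and link offsets are illustrative
# placeholders, not the actual T-FLoW 3.0 arm parameters.
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the local z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(joint_angles, link_lengths):
    """Chain rotation and translation matrices to get the end-effector pose."""
    T = np.eye(4)
    for theta, L in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(L, 0, 0)
    return T  # pose of the end-effector in the base frame

# Example: a 3-joint planar sub-chain with 0.1 m links.
pose = forward_kinematics([0.3, -0.5, 0.2], [0.1, 0.1, 0.1])
print(pose[:3, 3])  # end-effector position
```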

Open Link
08-11-2021

Improved Damped Least Squares Inverse Kinematics with Joint limits for 7-DOF “T-FLoW” Humanoid Robot Manipulator

M. R. H. Setyawan, S. Dewanto, B. S. Marta, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2021

Keywords : Inverse Kinematics, Damped Least Squares Method, Joint Limits, Redundant Manipulator, Humanoid robot

The manipulator on a humanoid robot functions as an arm to grasp objects. The end-effector position of the robot must first be known to perform the grasping task; therefore, a kinematics solution is used to find the end-effector position in Cartesian space. This paper presents the inverse kinematics of the 7-DOF T-FLoW humanoid robot manipulator using the Improved Damped Least Squares method with joint limits to avoid mechanical limitations. Forward kinematics with the homogeneous transformation matrix is used in the solution to find the current position of the end-effector in Cartesian space. This research uses the DLS method because it avoids kinematic singularities and performs better than pseudoinverse-based formulations. The experimental results show that the improved solution is more robust in enforcing joint limits, with a success rate of 100%, and generates more natural motion than the original DLS.
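A minimal numpy sketch of a single damped-least-squares update with simple joint-limit clamping is shown below; the Jacobian, damping factor, and limits are illustrative placeholders and do not reproduce the paper's exact improved formulation.

```python
# One damped-least-squares (DLS) inverse-kinematics step with joint-limit
# clamping: dq = J^T (J J^T + lambda^2 I)^-1 e. The Jacobian function, damping
# factor, and limits are illustrative, not the paper's exact formulation.
import numpy as np

def dls_step(q, jacobian, task_error, damping=0.05, q_min=None, q_max=None):
    """Take one DLS step toward reducing the task-space error, then clamp joints."""
    J = jacobian(q)                              # (m x n) Jacobian at configuration q
    JJt = J @ J.T
    dq = J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), task_error)
    q_new = q + dq
    if q_min is not None and q_max is not None:
        q_new = np.clip(q_new, q_min, q_max)     # simple joint-limit handling
    return q_new

# Typical use: iterate dls_step until the task-space error is small enough.
```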

Open Link
08-11-2021

3D Object Detection and Recognition based on RGBD Images for Healthcare Robot

I. Birri, B. S. B. Dewantara, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2021

Keywords : 3D, Object, Detection, Recognition, Pointcloud, RGBD

During the COVID-19 pandemic, hospitals have experienced an increase in the number of patients due to the rapid spread of the virus, and the need for hospital services has increased compared to normal days. A healthcare robot is therefore needed to help serve patients and medical personnel in the hospital. The robot must be able to detect and recognize objects and put them in the expected place. The sensor used is a depth or stereo camera, whose output is an RGB-D image that we convert into a point cloud to obtain 3D information. The 3D information is then segmented and clustered using RANSAC and Euclidean clustering to obtain the objects to be detected. Feature extraction then uses the Viewpoint Feature Histogram (VFH) descriptor to obtain the characteristics of each object. Matching against the dataset is performed using an Artificial Neural Network, followed by labelling and visualization of the result. With this system, the robot can detect and recognize objects around the hospital so that it can act on those objects. At the end of this project, nine datasets and three scenes captured by the authors were tested. The results show an average accuracy of 90.77% for testing three multi-object scenes and 98.73% for testing a single object.
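A rough sketch of the plane-removal and clustering stage is shown below, using Open3D as a stand-in for the PCL-style pipeline described in the abstract and DBSCAN as a stand-in for Euclidean clustering; the VFH and ANN stages are omitted, and the file name and all parameters are illustrative assumptions.

```python
# Rough sketch of the segmentation stage: remove the dominant plane with RANSAC,
# then cluster the remaining points into candidate objects. Open3D stands in for
# the PCL pipeline of the paper, and DBSCAN stands in for Euclidean clustering;
# the VFH descriptor and ANN matching stages are omitted. Parameters and the
# input file are illustrative.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")  # hypothetical RGB-D-derived cloud

# 1) RANSAC plane segmentation: find and remove the table/floor plane.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# 2) Cluster the remaining points into candidate objects.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=20))
clusters = [objects.select_by_index(np.where(labels == i)[0])
            for i in range(labels.max() + 1)]
print(f"found {len(clusters)} candidate objects")
```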

Open Link
08-11-2021

Fuzzy Social Force Model for Healthcare Robot Navigation and Obstacle Avoidance

A. T. Rifqi, B. S. B. Dewantara, D. Pramadihanto, B. S. Marta,

Publisher : IEEE
Publication Year : 2021

Keywords : Autonomous Navigation, Object Detection, Fuzzy Inference System, Social Force Model

Autonomous navigation is one of the important functions of the Healthcare Robot, producing obstacle-free movements in social environments inhabited by humans. In carrying out its duties, the robot navigates frequently from an origin to a destination. The Healthcare Robot uses a Laser Range Finder to detect objects around it; the detection results are distance and angle data for each object. These data are used as input to a Fuzzy Inference System (FIS) to produce an appropriate gain value for controlling the static and dynamic forces of the Social Force Model (SFM). The SFM parameters influence the robot's response to the detected object, so the FIS is used to change the parameters adaptively and obtain the optimal gain value. Adaptive parameters prevent the robot from exhibiting unexpected navigational behavior that may be dangerous, threatening to others, or potentially self-destructive. In tests carried out under two conditions, the robot successfully navigated from its initial position to the goal and responded to objects around it, with an overall success rate of 79.9% across all scenes.
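As a toy illustration, the sketch below scales an SFM-style repulsive force by a gain produced from a small fuzzy-inference step over obstacle distance; the membership functions, rule outputs, and force parameters are assumptions for illustration, not the paper's tuned FIS or SFM parameters.

```python
# Toy sketch: a fuzzy-inference gain scaling a Social Force Model (SFM)
# repulsive force based on obstacle distance. Membership functions, rule
# outputs, and force parameters are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_gain(distance):
    """Near obstacles -> high gain, far obstacles -> low gain (centroid defuzzification)."""
    near = tri(distance, -1.0, 0.0, 1.0)
    mid  = tri(distance, 0.5, 1.5, 2.5)
    far  = tri(distance, 2.0, 3.5, 5.0)
    w = np.array([near, mid, far])
    g = np.array([2.0, 1.0, 0.3])          # assumed output gains per rule
    return float(w @ g / (w.sum() + 1e-9))

def repulsive_force(robot_pos, obstacle_pos, A=2.0, B=0.5):
    """Exponential SFM-style repulsion from the obstacle, scaled by the fuzzy gain."""
    diff = np.asarray(robot_pos) - np.asarray(obstacle_pos)
    d = np.linalg.norm(diff)
    direction = diff / (d + 1e-9)
    return fuzzy_gain(d) * A * np.exp(-d / B) * direction

print(repulsive_force([0.0, 0.0], [0.8, 0.2]))
```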

Open Link
08-11-2021

Detecting Human Attendance using 1-Dimensional Foot Signal from Laser Range Sensor

M. D. G. P. Malik, D. Pramadihanto, B. S. B. Dewantara,

Publisher : IEEE
Publication Year : 2021

Keywords : laser range finder, support vector machine, sliding window, pyramid scanning, human foot signal

Detecting dynamic and static objects is one of the important abilities of a mobile robot, including a healthcare mobile robot. As the robot carries out its duties, it frequently encounters both dynamic and static objects, so recognizing them is crucial. In this paper, the presence of humans, as an example of dynamic objects, is detected using a Laser Range Finder (LRF), since it can work faster than ordinary cameras. To recognize human data obtained from the LRF, a sliding-window process is applied to extract the signal data of human feet, which is then classified using a Support Vector Machine (SVM). To overcome differences in the size of the human foot signal caused by changes in distance, a pyramid scanning process is also applied. Based on the experimental results, humans can be detected at distances from 1 to 4.25 meters within 73 ms.
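A hedged sketch of the sliding-window classification stage is shown below: windows of the 1-D range scan are resized to a fixed length and classified by a pre-trained SVM. The window size, step, target length, and training data are illustrative placeholders rather than the paper's values.

```python
# Sketch of sliding-window detection over a 1-D laser scan. Windows of scan
# points are resized to a fixed length and classified by a pre-trained SVM.
# Window size, step, target length, and the toy training set are illustrative.
import numpy as np
from sklearn.svm import SVC

def resize_1d(window, target_len=32):
    """Linear interpolation to a fixed-length feature vector."""
    x_old = np.linspace(0.0, 1.0, len(window))
    x_new = np.linspace(0.0, 1.0, target_len)
    return np.interp(x_new, x_old, window)

def detect_feet(scan, clf, window_len=64, step=8):
    """Slide a window over the range scan; return start indices classified as human."""
    hits = []
    for start in range(0, len(scan) - window_len + 1, step):
        feat = resize_1d(scan[start:start + window_len]).reshape(1, -1)
        if clf.predict(feat)[0] == 1:   # 1 = human foot pattern
            hits.append(start)
    return hits

# Tiny synthetic training set (replace with real labeled foot windows).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 32)); y = np.array([0, 1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(detect_feet(rng.normal(size=1024), clf))
```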

Open Link
08-11-2021

FLoW-Vision: Object Recognition and Pose Estimation System based on Three-Dimensional (3D) Computer Vision

V. C. P. H. Putra, K. I. Apriandy, D. Pramadihanto, A. R. Barakbah,

Publisher : IEEE
Publication Year : 2021

Keywords : Computer Vision, 3D Image Processing, 3D Object Detection, RGB-D Image, Point Cloud.

This paper presents three-dimensional computer-vision-based object recognition for FLoW-Vision at RoISC (formerly ER2C), which has entered its second phase. Previously, the robot had a basic vision system that replicated ‘human-like’ visual skills using 2D computer vision. We therefore propose the design and implementation of an object recognition and pose estimation system based on three-dimensional computer vision to handle object recognition and pose estimation tasks in real-world environments simultaneously. In the object recognition process, a point-cloud segmentation method is used to obtain candidate object clusters before computing feature descriptions. Then, a keypoint-based two-stage matching process is performed to speed up the computation of finding correspondences between the object clusters of the current scene and a colored point cloud model of an object. Next, a Hough voting algorithm is employed to filter out matching errors in the correspondence set and estimate the initial 3D pose of the object. The last process estimates the pose of the clustered object using RANSAC to find the largest surface, which is taken as the Z surface. Experiments validate that object recognition of the proposed system works correctly 100% of the time and pose estimation works correctly 60% of the time in a complex real-world scene.
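A sketch of the final pose step described above might look like the following: fit the largest planar surface of an object cluster with RANSAC and take its normal as the Z axis of the object frame. Open3D and numpy stand in for the actual implementation, and the thresholds are illustrative.

```python
# Sketch of the final pose-estimation step: fit the largest planar surface of an
# object cluster with RANSAC and treat its normal as the Z axis of the object
# frame. Open3D/numpy stand in for the actual implementation; thresholds are
# illustrative.
import numpy as np
import open3d as o3d

def pose_from_largest_plane(cluster: o3d.geometry.PointCloud) -> np.ndarray:
    """Return a 4x4 pose whose Z axis is the normal of the largest RANSAC plane."""
    (a, b, c, d), inliers = cluster.segment_plane(distance_threshold=0.005,
                                                  ransac_n=3,
                                                  num_iterations=500)
    z = np.array([a, b, c]) / np.linalg.norm([a, b, c])
    # Build an arbitrary orthonormal frame around the plane normal.
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x, y, z
    pose[:3, 3] = np.asarray(cluster.points)[inliers].mean(axis=0)  # plane centroid
    return pose
```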

Open Link
08-11-2021

Wall Following and Obstacle Avoidance Control in Roisc-v1.0 (Robotic Disinfectant) using Behavior Based Control

Y. Sadewa, E. H. Binugroho, N. Hanafi, I. Dadet Pramadihanto, A. Fauzi, A. Purwanto,

Publisher : IEEE
Publication Year : 2021

Keywords : Behavior-Based Control, Obstacle Avoidance, Omniwheel Mobile Robot, Potential Field, Wall Following.

A robot that can move independently is an essential step toward replacing humans in hazardous work conditions. This paper presents the development of a mobile robot called ROISC-v1.0 (Robotic Disinfectant), which executes a room sterilization procedure using UV light with a wavelength of 222 nm. The goal of this study is to create a wall-following navigation system with obstacle avoidance capabilities. A behavior-based control method is used for the navigation system, including wall following and obstacle avoidance, so that the mobile robot can modify its linear and angular speeds along the course of motion. Behavior-based control eliminates the robot's reliance on conditioning of its work area. To measure the distance between the robot and the wall, and to detect obstacles in the robot's work area, the ROISC-v1.0 robot uses an array of 12 VL53L0X lidar sensors. As a result, the robot successfully navigates along the contours of the wall while avoiding static and dynamic obstacles. When there are no obstacles in the way, the ROISC-v1.0 robot performs optimally and efficiently in a 4.5 m x 2.8 m work area with an average travel time of 73.4 seconds and an average travel distance of 880.4 cm. With an average travel time of 109 seconds and a distance of 1052.4 cm, the robot also performs optimally and efficiently in regions where obstacles are present. The VL53L0X ToF sensor, which uses light waves in the transmission process and has an average error of 0.7 cm, allows the robot to read bright objects more accurately. As a result of this research, the ROISC-v1.0 robot is expected to aid medical professionals by minimizing the risk of virus dissemination during the sterilization process.
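As a toy illustration of behavior-based coordination, the sketch below lets an obstacle-avoidance behavior pre-empt a wall-following behavior, each producing (linear, angular) speed commands; the thresholds, gains, and sensor layout are illustrative assumptions, not the tuned ROISC-v1.0 behaviors.

```python
# Toy sketch of behavior-based arbitration: obstacle avoidance pre-empts wall
# following, and each behavior outputs (linear, angular) speed commands.
# Thresholds and gains are illustrative, not the ROISC-v1.0 values.

def wall_following(right_dist, target=0.40, k=2.0):
    """Keep a constant distance to the wall on the robot's right side."""
    error = right_dist - target
    return 0.25, -k * error            # (linear m/s, angular rad/s)

def obstacle_avoidance(front_dist, safe=0.30):
    """Stop forward motion and rotate away when something is close in front."""
    if front_dist < safe:
        return 0.0, 0.8
    return None

def arbitrate(front_dist, right_dist):
    """Priority-based behavior coordination: avoidance overrides wall following."""
    cmd = obstacle_avoidance(front_dist)
    return cmd if cmd is not None else wall_following(right_dist)

print(arbitrate(front_dist=0.80, right_dist=0.55))   # clear ahead -> follow the wall
print(arbitrate(front_dist=0.15, right_dist=0.55))   # blocked -> avoid
```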

Open Link
20-10-2020

Human Partner and Robot Guide Coordination System Under Social Force Model Framework Using Kinect Sensor

H. M. Mu’allimi, B. Sena Bayu Dewantara, D. Pramadihanto, B. S. Marta,

Publisher : IEEE
Publication Year : 2020

Keywords : robot guide, Kinect sensor, target partner, orientation, social force model

A robot guide is a robot used to guide users from a place of origin to their destination. While carrying out the guiding task, the robot must ensure that the user keeps following in whatever direction the robot is heading until reaching the destination. To obtain this certainty, one of the things that must be considered is whether the user is walking in the direction of the robot's movement. In this paper, we build a system to monitor the user's level of awareness in order to coordinate with the robot. We use RGB-D data from the Kinect sensor to detect the target partner and their position and orientation. The level of awareness is calculated using the Social Force Model (SFM) based on the target partner's position and body orientation. This level of awareness is used by the robot to choose appropriate actions according to the target partner's activities. Based on the experimental results, the target partner's level of awareness can be calculated and transformed into attractive or repulsive forces toward the robot.
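As a rough illustration, the sketch below combines the partner's distance and body orientation into a simple awareness-style score; the weighting and form of the score are assumptions for illustration, not the paper's SFM formulation.

```python
# Toy sketch of an awareness-style score from the partner's position and body
# orientation: the more the partner faces the robot and the closer they are,
# the higher the score. The weighting is an illustrative assumption.
import numpy as np

def awareness(robot_pos, partner_pos, partner_heading, d0=2.0):
    """Combine facing alignment and proximity into a [0, 1] awareness score."""
    to_robot = np.asarray(robot_pos) - np.asarray(partner_pos)
    dist = np.linalg.norm(to_robot)
    facing = np.array([np.cos(partner_heading), np.sin(partner_heading)])
    alignment = max(0.0, float(facing @ (to_robot / (dist + 1e-9))))  # 1 = facing robot
    proximity = np.exp(-dist / d0)
    return alignment * proximity

print(awareness([0.0, 0.0], [1.5, 0.0], np.pi))  # partner 1.5 m away, facing the robot
```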

Open Link
20-10-2020

Secure Communication System of Drone Service using Hybrid Cryptography over 4G/LTE Network

F. Ronaldo, D. Pramadihanto, A. Sudarsono,

Publisher : IEEE
Publication Year : 2020

Keywords : Drone Service, Secure Communication, Lightweight Hybrid Cryptography, Internet of Drone, Cloud

A drone service is an Unmanned Aerial Vehicle (UAV) whose function is to deliver goods via air routes. Drone services sometimes have to travel long distances with a guaranteed security system to reach the customer's location and return safely to base. In an insecure Internet of Drones (IoD) environment, however, many attacks may try to manipulate the reported location or interrupt the transmission of data using fake nodes. Therefore, this paper focuses on a reliable data communication security system between the drone service and the server. Because the drone service uses a lightweight device, we propose a hybrid cryptographic security scheme that has simple computation but a high level of security. It uses layered encryption with AES-256, ECC, and SHA-256, which provides strong data authentication and encryption for each node. The proposed scheme is efficient in terms of processing time, so it does not significantly affect the real-time sensor data from the drone. The experimental results show that the proposed system takes 88.61 milliseconds to encrypt and decrypt messages on a Raspberry Pi device.
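One plausible layering of the primitives named in the abstract (ECC key agreement, SHA-256-based key derivation, and AES-256 authenticated encryption) is sketched below using the Python cryptography package; this is an illustrative composition, not the paper's exact protocol.

```python
# Sketch of a hybrid scheme layering the primitives named above: ECC (ECDH) key
# agreement, SHA-256-based key derivation (HKDF), and AES-256-GCM encryption.
# This is one plausible layering for illustration, not the paper's protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each party holds an EC key pair (drone and server).
drone_key = ec.generate_private_key(ec.SECP256R1())
server_key = ec.generate_private_key(ec.SECP256R1())

# ECDH shared secret, derived into a 256-bit AES key via HKDF-SHA256.
shared = drone_key.exchange(ec.ECDH(), server_key.public_key())
aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"drone-telemetry").derive(shared)

# AES-256-GCM gives confidentiality plus integrity/authentication of the payload.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b'{"lat": -7.27, "lon": 112.79}', None)

# Server side: same derivation from its private key and the drone's public key.
server_aes = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"drone-telemetry").derive(
                      server_key.exchange(ec.ECDH(), drone_key.public_key()))
print(AESGCM(server_aes).decrypt(nonce, ciphertext, None))
```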

Open Link
11-08-2020

FFT-based Human Detection using 1-D Laser Range Data

B. S. Bayu Dewantara, S. Dhuha, B. S. Marta, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2020

Keywords : Fast Fourier Transform, Human legs, Laser Range Finder, Human detection, Support Vector Machine

In general, a socially-aware mobile robot must be able to navigate safely in human environments. To achieve this competency, the mobile robot must be able to detect the presence of humans around it. This paper proposes the use of the Fast Fourier Transform (FFT) to analyze shape models of human legs obtained from Laser Range Finder (LRF) scans. An LRF with a 240° field of view was used to capture and visualize the environment in a one-dimensional plane. The plane is then converted into a one-dimensional signal consisting of 1,024 data points, which represent the distances of all measured points. A specific set of points forming the pattern of human legs is then resized to 32 data points, and this resized data is transformed into the frequency domain using the FFT. The FFT result is fed into a Support Vector Machine (SVM) to be classified into two classes: human or not human. Based on the experimental results, our proposed method shows promising results for detecting humans based on the one-dimensional features of their legs, achieving more than 80% accuracy.
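A minimal sketch of the FFT feature stage is shown below: a 32-point leg-pattern window is transformed with the FFT and its magnitude spectrum is fed to an SVM. The training data here is a synthetic placeholder, not the paper's dataset.

```python
# Sketch of the FFT feature stage described above: a 32-point leg-pattern window
# is transformed with the FFT and its magnitude spectrum is fed to an SVM.
# The training data below is a synthetic placeholder.
import numpy as np
from sklearn.svm import SVC

def fft_feature(window_32):
    """One-sided magnitude spectrum of a 32-sample distance window."""
    return np.abs(np.fft.rfft(window_32))   # 17 real-valued coefficients

rng = np.random.default_rng(1)
legs     = np.sin(np.linspace(0, 2 * np.pi, 32)) + rng.normal(0, 0.1, (50, 32))
not_legs = rng.normal(0, 1.0, (50, 32))

X = np.array([fft_feature(w) for w in np.vstack([legs, not_legs])])
y = np.array([1] * 50 + [0] * 50)            # 1 = human legs, 0 = not human

clf = SVC(kernel="rbf").fit(X, y)
print("train accuracy:", clf.score(X, y))
```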

Open Link
18-11-2019

2D Mapping and Localization using Laser Range Finder for Omnidirectional Mobile Robot

A. A. Kusumo, B. Sandi Marta, B. S. Bayu Dewantara, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2019

Keywords : ICP, SLAM, omnidirectional mobile robot, odometry, lidar


Open Link
18-11-2019

Motion Modeling of Traditional Javanese Dance: Introduction of Javanese Dancer Gesture with 3D Models

A. Nurindiyani, D. Pramadihanto, R. Afifah,

Publisher : IEEE
Publication Year : 2019

Keywords : Traditional Dance, Motion Capture, 3D skeleton.

This work proposes building knowledge for 3D gesture modeling of Javanese dance by using a motion capture technique with an original dancer. The capturing process records all rotation values of the dancing motion, represented by each skeleton joint, and a sequence of data is created by the system. As a result, we built a 3D skeleton model and obtained the gesture data using motion capture to build an interactive learning medium for traditional Javanese dance.

Open Link
18-11-2019

HOG-based Hand Gesture Recognition Using Kinect

K. N. Krisandria, B. S. B. Dewantara, D. Pramadihanto,

Publisher : IEEE
Publication Year : 2019

Keywords : hand gesture, Kinect, histogram of oriented gradient, dynamic time warping

One of the keys to successful interaction between people is communication, which can be verbal or non-verbal. In this paper, we build interaction between humans and computers using hand gestures. The hand gesture is recognized from the palm of the hand, which is obtained from human skeleton segmentation using the Kinect camera. Recognition of palm gestures is performed on a series of RGB Kinect output frames. A Histogram of Oriented Gradients (HOG) is used to produce a per-frame palm gesture feature, which is accumulated over 4 seconds as a gesture descriptor. Dynamic Time Warping (DTW) is used as a classifier that compares the input gesture descriptor with template gesture descriptors. Based on the experimental results, the performance of the hand gesture recognition system reached 76.7%.
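A hedged sketch of this pipeline is shown below: a per-frame HOG feature of a palm crop, and a simple DTW distance between feature sequences; the crop size, HOG parameters, frame count, and templates are illustrative placeholders, not the paper's settings.

```python
# Sketch of the recognition pipeline described above: a per-frame HOG feature of
# the palm crop, and sequences compared with a simple DTW distance. Crop size,
# HOG parameters, and the template are illustrative placeholders.
import numpy as np
from skimage.feature import hog

def palm_feature(gray_crop_64x64):
    """HOG descriptor of a grayscale palm crop."""
    return hog(gray_crop_64x64, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Classification: pick the template gesture with the smallest DTW distance.
rng = np.random.default_rng(2)
frames = rng.random((30, 64, 64))                        # placeholder gesture clip
query = np.array([palm_feature(f) for f in frames])
templates = {"wave": query + rng.normal(0, 0.01, query.shape)}
print(min(templates, key=lambda k: dtw_distance(query, templates[k])))
```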

Open Link