Modares Mechanical Engineering

Learning of the 5-finger robot hand using deep learning for stable grasping

Article type: Original Research

Authors
1 Malayer University
2 MSc Student
3 MSc Student
Abstract
The human hand is one of the most complex organs of the human body and is capable of performing dexterous tasks. Acting dexterously, and grasping in particular, is a vital ability for robots. Nevertheless, grasping objects with a robot hand is a challenging problem, and many researchers have applied deep learning and computer vision methods to it. This paper presents a humanoid robot hand with 5 degrees of freedom. The robotic hand is fabricated with a 3D printer, and 5 servo motors drive the fingers. To keep the hand simple, a tendon-based transmission system is chosen, which allows the fingers to flex and extend. The goal of this paper is to use a deep learning algorithm for semi-automatic grasping of various objects. To this end, a convolutional neural network is trained on more than 600 images collected by a camera mounted on the robot hand. The performance of the algorithm is then tested on various objects under similar conditions. Finally, the robotic hand grasps objects successfully with 85% accuracy.
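To make the training step concrete, the following is a minimal sketch of how a small convolutional classifier could be trained on a few hundred wrist-camera images. The directory layout, the 128x128 input size, the Keras/TensorFlow stack, and the network depth are illustrative assumptions; the paper does not specify these details.

```python
# Minimal training sketch (assumptions: Keras/TensorFlow, a folder of labeled
# wrist-camera images, 128x128 RGB inputs, and a small CNN; none of these
# details are taken from the paper).
import tensorflow as tf

IMG_SIZE = (128, 128)

# Hypothetical dataset layout: grasp_dataset/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "grasp_dataset", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "grasp_dataset", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=16)

num_classes = len(train_ds.class_names)  # e.g. one class per object / grasp type

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# With only ~600 images, a shallow network and few epochs help limit overfitting.
model.fit(train_ds, validation_data=val_ds, epochs=20)
model.save("grasp_cnn.keras")
```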

Article Title (English)

Learning of the 5-finger robot hand using deep learning for stable grasping

Authors (English)

Hamidreza Heidari 1
Tayebeh Ghahri Saremi 2
Tahereh Ghahri Saremi 3
1 Associate Professor
2 MSc Student
3 MSc Student
Abstract (English)

The human hand is one of the most complex organs of the human body, capable of performing skilled tasks. Manipulation, and especially grasping, is a critical ability for robots. However, grasping objects with a robot hand is a challenging problem, and many researchers have used deep learning and computer vision methods to solve it. This paper presents a humanoid 5-degree-of-freedom robot hand for grasping objects. The robotic hand is fabricated with a 3D printer, and 5 servo motors are used to move the fingers. To simplify the robotic hand, a tendon-based transmission system was chosen that allows the robot's fingers to flex and extend. The purpose of this article is to use a deep learning algorithm to grasp different objects semi-automatically. To this end, a convolutional neural network is trained with more than 600 images collected by a camera mounted on the robot's hand. The performance of this algorithm is then tested on different objects under similar conditions. Finally, the robot hand is able to grasp objects successfully with 85% accuracy.
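As a rough illustration of the semi-automatic grasping loop, the sketch below passes one wrist-camera frame through a trained classifier and maps the predicted class to finger angles sent to the servo controller. The model file name, camera index, label set, angle table, and the OpenCV plus serial-port plumbing are hypothetical stand-ins, not the authors' implementation.

```python
# Grasp-execution sketch (assumptions: OpenCV camera capture, the classifier saved
# by the training sketch, a hypothetical label-to-finger-angle table, and a serial
# link to the servo controller; none of this is taken from the paper).
import cv2
import numpy as np
import serial
import tensorflow as tf

model = tf.keras.models.load_model("grasp_cnn.keras")   # trained classifier
class_names = ["bottle", "box", "cup", "none"]           # hypothetical labels (must match training folders)

# Hypothetical flexion angles (degrees) for the 5 tendon-driven fingers per class.
GRASP_POSES = {
    "bottle": (120, 110, 110, 110, 100),
    "box":    (90, 95, 95, 95, 90),
    "cup":    (100, 105, 105, 105, 95),
    "none":   (0, 0, 0, 0, 0),                           # keep the hand open
}

cap = cv2.VideoCapture(0)                                # wrist-mounted camera
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("could not read a frame from the camera")

# Preprocess to match the training input size (128x128 RGB, batch of 1).
rgb = cv2.cvtColor(cv2.resize(frame, (128, 128)), cv2.COLOR_BGR2RGB)
logits = model.predict(rgb[np.newaxis].astype(np.float32))
label = class_names[int(np.argmax(logits))]

# Send one comma-separated command line to the servo controller (e.g. an Arduino).
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as link:
    angles = GRASP_POSES[label]
    link.write((",".join(map(str, angles)) + "\n").encode())
print("predicted:", label, "-> finger angles:", angles)
```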

Keywords (English)

Deep learning
Grasping
Convolutional neural network
Robot hand
Computer vision