Academic Paper

Development of a Vision Based Mapping in Rubber Tree Orchard
Document Type
Conference
Source
2018 International Conference on Engineering, Applied Sciences, and Technology (ICEAST), pp. 1-4, Jul. 2018
Subject
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
General Topics for Engineers
Power, Energy and Industry Applications
Robotics and Control Systems
Cameras
Rubber
Vegetation
Meters
Production
Geometry
Pose estimation
Computer vision
Tree detection
Mapping
Rubber tree orchard
Language
English
Abstract
A mapping method for rubber tree orchards was developed using image processing. Because of high labor costs and the continuous decline in natural rubber prices in recent years, rubber tree farmers have struggled to maintain their profits and production. Automation technology in agriculture can help cut production costs, a large share of which is harvesting labor expense. To create such automation, autonomous navigation and orchard mapping are the first challenges. Because the natural rubber industry is concentrated in particular parts of the world, mostly Southeast Asia, vision-based mapping and autonomy for rubber tree orchards have not been studied widely. This research aims to develop a model of a vision mapping system suitable for rubber tree plantations, based on the common farming platform in Thailand. The vision model was designed to use a single camera capturing calibrated targets placed on the rubber tree trunks. The length of each target in the captured image was then used to estimate the distance and position of the camera relative to the orchard geometry. Larger targets yield higher accuracy, but one large single target is not practical to install on a tree, so a two-separated-targets technique was created. Three separated-target lengths, 0.3, 0.5, and 0.7 meters, were examined in the experiments. Percent errors of the target-to-camera distance (Z-direction) and the target-to-camera-center distance (X-direction) were evaluated, along with their uncertainties. The results showed that the largest target gave smaller error uncertainties, but the percentage errors were quite similar across all target sizes: because the absolute errors are proportional to the target sizes, the percentage errors scale with them. The experiments were carried out at 1-5 meter distances between target and camera, set to cover the typical 3 meter spacing between tree rows. The vision mapping achieved about 8 cm repeatability in the Z-direction and about 13 cm in the X-direction. This magnitude of error may seem large, but it is practical for orchard autonomy, which is usually designed for low-speed vehicles working over a large area.
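The distance estimation the abstract describes, recovering range from the apparent size of a target of known physical length, is consistent with standard pinhole-camera geometry. Below is a minimal sketch of that idea, assuming a pinhole model; the focal length, principal point, and all numeric measurements are hypothetical placeholders for illustration, not values or code from the paper.

```python
# Minimal sketch of distance (Z) and lateral offset (X) estimation from a
# calibrated target of known length, assuming a pinhole-camera model.
# All parameter values here are hypothetical, not taken from the paper.

def estimate_target_position(f_px, cx_px, target_len_m,
                             target_len_px, target_center_u_px):
    """Estimate camera-to-target distance Z and lateral offset X (meters).

    f_px               -- focal length in pixels (from camera calibration)
    cx_px              -- principal point x-coordinate in pixels
    target_len_m       -- known physical length of the calibrated target (m)
    target_len_px      -- apparent length of the target in the image (pixels)
    target_center_u_px -- x-coordinate of the target's center in the image
    """
    # Pinhole model: apparent size shrinks linearly with distance,
    # so Z = f * (real length / pixel length).
    z_m = f_px * target_len_m / target_len_px
    # Lateral offset from the optical axis follows from similar triangles.
    x_m = z_m * (target_center_u_px - cx_px) / f_px
    return z_m, x_m

if __name__ == "__main__":
    # Hypothetical example: a 0.5 m target imaged 200 px long, centered
    # 120 px right of the principal point, with an 800 px focal length.
    z, x = estimate_target_position(
        f_px=800.0, cx_px=640.0,
        target_len_m=0.5, target_len_px=200.0, target_center_u_px=760.0,
    )
    print(f"Z = {z:.2f} m, X = {x:.2f} m")  # Z = 2.00 m, X = 0.30 m
```

Since Z is inversely proportional to the measured pixel length, a fixed pixel-measurement error produces an absolute range error that grows with distance and shrinks with target size, which matches the abstract's observation that larger targets reduce error while percentage errors stay roughly constant across target sizes.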