Journal of Agricultural Big Data ›› 2025, Vol. 7 ›› Issue (4): 446-457. DOI: 10.19788/j.issn.2096-6369.000115


Multimodal Data Fusion-Driven Virtual Electronic Fence Livestock Presence Judgment Model for Field Pastures

LI ShiJie1,3,5,6, KONG FanTao3,4, CAO ShanShan2,4, SUN Wei2,4,*

1. College of Computer and Information Engineering, Xinjiang Agricultural University, Urumqi 830052, China
    2. Agricultural Information Institute of CAAS, Beijing 100081, China
    3. Institute of Agricultural Economics and Development, Chinese Academy of Agricultural Sciences, Beijing 100081, China
    4. National Nanfan Research Institute of CAAS, Sanya 572024, Hainan, China
    5. Xinjiang Agricultural Informatization Engineering Technology Research Center, Urumqi 830052, China
    6. Engineering Research Center for Intelligent Agriculture, Ministry of Education, Urumqi 830052, China
  • Received: 2025-05-09  Revised: 2025-06-12  Online: 2025-12-26  Published: 2025-12-26
  • Contact: SUN Wei

Abstract:

Physical fences such as barbed wire in traditional field pastures hinder livestock transhumance, wildlife migration, and grassland ecological connectivity, while existing virtual electronic fences mostly rely on electronic-map localization and contact-based smart collars worn by individual animals, which cause strong stress responses, are prone to falling off, and carry high data-maintenance costs. By fusing the binocular stereo vision, GPS positioning, and IMU sensor data collected by a grazing robot, we construct a multimodal data fusion-driven model for livestock position sensing and in-fence judgment. Taking cattle under natural grazing in a field pasture as the research object, the virtual electronic fence boundary of the pasture is first constructed with the Gaode map API. The YOLOv8s model then detects individual cattle targets in the binocular stereo images, and the depth information of the stereo pair is used to resolve the spatial distance between each recognized target and the grazing robot. This relative position is fused with the robot's GPS absolute positioning data and IMU attitude data using the Extended Kalman Filter algorithm, mapping each animal's spatial position to geographic coordinates and solving the latitude and longitude of the cattle within the robot's field of view. Finally, a vertex fine-tuning strategy and a buffer warning mechanism are introduced, and an improved ray-casting method (the Pnpoly algorithm) yields the in-fence judgment for each animal relative to the virtual electronic fence. We continuously collected 200 cattle movement trajectories and experimentally verified data fusion, parsing, and acquisition in virtual electronic fence scenarios with convex-polygon, concave-polygon, and irregular boundaries; the in-fence judgment accuracy reaches 97.8%, 4.3% higher than the traditional algorithm. The results show that this multimodal data-driven method, fusing machine vision and sensor data, offers strong adaptability and engineering application value in field pasture environments, and can provide non-contact, high-precision, continuous, and stable virtual electronic fence spatial management data for livestock management.
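To make the coordinate-mapping step concrete, the following minimal Python sketch (not the authors' implementation; all function names and values are hypothetical, and a local flat-earth approximation stands in for the paper's EKF-based fusion) projects a stereo-derived range and bearing, together with the robot's GPS fix and IMU heading, onto latitude and longitude:

# Illustrative sketch only: projecting a stereo-measured cattle offset (range and
# bearing relative to the grazing robot) onto geographic coordinates, given the
# robot's GPS fix and IMU heading. Flat-earth approximation; names are hypothetical.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def cattle_latlon(robot_lat, robot_lon, heading_deg, range_m, bearing_deg):
    """Convert a (range, bearing) offset in the robot's camera frame to lat/lon.

    heading_deg: robot yaw from the IMU/compass, clockwise from true north.
    bearing_deg: direction to the cattle target relative to the camera axis,
                 e.g. derived from the pixel column of the detected bounding box.
    """
    azimuth = math.radians(heading_deg + bearing_deg)  # absolute direction from north
    north_m = range_m * math.cos(azimuth)
    east_m = range_m * math.sin(azimuth)
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(robot_lat))))
    return robot_lat + dlat, robot_lon + dlon

# Example: a cow detected 25 m away, 10 degrees right of the camera axis,
# with the robot at a (hypothetical) Urumqi-area fix heading due east.
print(cattle_latlon(43.81, 87.61, 90.0, 25.0, 10.0))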
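Likewise, the in-fence judgment can be illustrated with a Pnpoly-style ray-casting test extended by a simple buffer warning band. This is a sketch under assumed names, coordinates, and thresholds; the paper's vertex fine-tuning strategy is not reproduced here:

# Illustrative sketch only: ray-casting (Pnpoly-style) point-in-polygon test
# plus a buffer warning band. The fence polygon and buffer width are hypothetical.
from math import hypot

def point_in_polygon(lon, lat, polygon):
    """Classic Pnpoly ray-casting test; `polygon` is a list of (lon, lat) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray cast from (lon, lat) with each edge.
        if (yi > lat) != (yj > lat):
            x_cross = (xj - xi) * (lat - yi) / (yj - yi) + xi
            if lon < x_cross:
                inside = not inside
        j = i
    return inside

def distance_to_boundary(lon, lat, polygon):
    """Minimum distance from the point to the polygon edges (planar approximation)."""
    best = float("inf")
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            t = 0.0
        else:
            # Project the point onto the edge segment, clamped to its endpoints.
            t = max(0.0, min(1.0, ((lon - x1) * dx + (lat - y1) * dy) / (dx * dx + dy * dy)))
        best = min(best, hypot(lon - (x1 + t * dx), lat - (y1 + t * dy)))
    return best

def fence_status(lon, lat, polygon, buffer_deg=1e-4):
    """Return 'outside', 'warning' (inside but near the boundary), or 'inside'."""
    if not point_in_polygon(lon, lat, polygon):
        return "outside"
    return "warning" if distance_to_boundary(lon, lat, polygon) < buffer_deg else "inside"

# Example: a (hypothetical) rectangular fence and one cattle position.
fence = [(87.60, 43.80), (87.62, 43.80), (87.62, 43.82), (87.60, 43.82)]
print(fence_status(87.6101, 43.8101, fence))  # -> 'inside'

The warning band here is expressed in degrees for simplicity (about 11 m of latitude at 1e-4 degrees); a production system would more likely compute the buffer in metres after projecting to a local planar frame.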

Key words: smart pasture management, virtual electronic fence, multimodal data fusion, in-fence judgment algorithm