Journal of Agricultural Big Data ›› 2025, Vol. 7 ›› Issue (4): 446-457. doi: 10.19788/j.issn.2096-6369.000115

• Data Processing and Analysis •


Multimodal Data Fusion-Driven Virtual Electronic Fence Livestock Presence Judgment Model for Field Pastures

LI ShiJie1,3,5,6, KONG FanTao3,4, CAO ShanShan2,4, SUN Wei2,4,*

  1. College of Computer and Information Engineering, Xinjiang Agricultural University, Urumqi 830052, China
    2. Agricultural Information Institute of CAAS, Beijing 100081, China
    3. Institute of Agricultural Economics and Development, Chinese Academy of Agricultural Sciences, Beijing 100081, China
    4. National Nanfan Research Institute of CAAS, Sanya 572024, Hainan, China
    5. Engineering Research Center for Intelligent Agriculture, Ministry of Education, Urumqi 830052, China
    6. Xinjiang Agricultural Informatization Engineering Technology Research Center, Urumqi 830052, China
  • Received: 2025-05-09  Revised: 2025-06-12  Published: 2025-12-26  Online: 2025-12-26
  • Corresponding author: SUN Wei, E-mail: sunwei02@caas.cn
  • About the author: LI ShiJie, E-mail: 18253802659@163.com
  • Supported by: National Key Research and Development Program of China (2024YFD200030502)


Abstract:

Physical fences such as barbed wire installed in traditional field pastures hinder livestock transhumance, wildlife migration, and grassland ecological connectivity, while existing virtual electronic fences mostly rely on electronic maps and contact-type smart collars worn by individual animals for positioning, which provokes strong stress responses, suffers from frequent collar loss, and incurs high data maintenance costs. By fusing three types of sensor data collected by a grazing robot, namely binocular stereo vision, GPS positioning, and IMU measurements, this study constructs a multimodal data fusion-driven model for livestock location sensing and in-fence judgment. Taking cattle under natural grazing conditions in a field pasture as the research object, the virtual electronic fence boundary data of the pasture are first constructed with the Gaode Map API. The YOLOv8s model is then used to extract individual cattle targets from binocular stereo images, and the depth information of the stereo images is used to resolve the spatial distance between each recognized cattle target and the grazing robot. Fusing the robot's absolute GPS positioning data with its IMU pose data, an Extended Kalman Filter algorithm maps each animal's spatial position to geographic coordinates, solving for the latitude and longitude of the cattle in the robot's field of view. Finally, a vertex fine-tuning strategy and a buffer-zone warning mechanism are introduced, and an improved ray-casting method (the Pnpoly algorithm) is used to obtain the in-fence judgment data for the virtual electronic fence. On 200 continuously collected cattle movement trajectories, the data fusion, parsing, and acquisition pipeline was experimentally verified under virtual fence scenarios with convex-polygon, concave-polygon, and irregular boundaries, achieving an in-fence judgment accuracy of 97.8%, a 4.3% improvement over the traditional algorithm. The results show that the multimodal data-driven method based on machine vision and sensor fusion is highly adaptable and of practical engineering value in field pasture environments, and can provide non-contact, high-precision, continuously stable spatial management data for livestock management with virtual electronic fences.
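To make the coordinate-mapping step concrete, the sketch below shows one minimal way to project a stereo-derived target distance and bearing into latitude and longitude under a local flat-earth approximation. The function name and parameters are illustrative assumptions; the paper itself fuses the GPS, IMU, and visual measurements with an Extended Kalman Filter, which is not reproduced here.

```python
import math

def target_geolocation(robot_lon, robot_lat, heading_deg,
                       depth_m, bearing_offset_deg):
    """Hypothetical sketch: map a detected animal to lon/lat.

    heading_deg        -- robot yaw from the IMU, clockwise from true north
    depth_m            -- stereo-derived distance to the target
    bearing_offset_deg -- horizontal angle of the target off the optical
                          axis, e.g. atan((u - cx) / fx) for a bounding-box
                          centre u and camera intrinsics cx, fx
    """
    R = 6371000.0  # mean Earth radius in metres
    bearing = math.radians(heading_deg + bearing_offset_deg)
    d_north = depth_m * math.cos(bearing)  # metres northward of the robot
    d_east = depth_m * math.sin(bearing)   # metres eastward of the robot
    lat = robot_lat + math.degrees(d_north / R)
    lon = robot_lon + math.degrees(
        d_east / (R * math.cos(math.radians(robot_lat))))
    return lon, lat
```

At pasture scale (tens to hundreds of metres) this small-angle approximation stays within centimetres of a full geodesic solution; the EKF described in the abstract would additionally smooth the noise in all three sensor channels.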

Key words: smart pasture management, virtual electronic fence, multimodal data fusion, in-fence judgment algorithm
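For the in-fence judgment itself, below is a minimal sketch of the classic Pnpoly ray-casting test combined with a buffer-zone warning, as the abstract outlines. The 10 m default buffer width, the equirectangular distance approximation, and all identifiers are assumptions for illustration; the paper's vertex fine-tuning strategy is not reproduced here.

```python
import math

def point_in_polygon(lon, lat, polygon):
    """Classic Pnpoly ray-casting test (Franklin's algorithm).

    polygon: list of (lon, lat) vertices of the virtual fence boundary.
    Returns True if the point lies inside the polygon.
    """
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle `inside` each time a rightward ray from the test point
        # crosses a polygon edge.
        if (yi > lat) != (yj > lat) and \
           lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def _point_segment_dist_m(p, a, b):
    """Approximate distance (m) from point p to segment ab, using a
    local equirectangular projection (adequate at pasture scale)."""
    R = 6371000.0
    lat0 = math.radians(p[1])
    def proj(q):
        return (math.radians(q[0]) * math.cos(lat0) * R,
                math.radians(q[1]) * R)
    px, py = proj(p); ax, ay = proj(a); bx, by = proj(b)
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def fence_status(lon, lat, polygon, buffer_m=10.0):
    """Return 'outside', 'warning' (inside but within buffer_m of the
    boundary), or 'inside' for one mapped animal position."""
    if not point_in_polygon(lon, lat, polygon):
        return "outside"
    edge_dist = min(
        _point_segment_dist_m((lon, lat), polygon[i - 1], polygon[i])
        for i in range(len(polygon)))
    return "warning" if edge_dist < buffer_m else "inside"
```

A caller would hold the fence as a list of (lon, lat) vertices built from the Gaode Map API and invoke, e.g., fence_status(87.6168, 43.8256, fence_vertices) for each mapped cattle position (coordinates here are hypothetical); the 'warning' state corresponds to the buffer-zone early-warning mechanism described in the abstract.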