Sweet pepper and peduncle 3D datasets

This page contains sweet pepper and peduncle 3D annotated datasets.

Peduncle Detection of Sweet Pepper combining colour and 3D for autonomous crop harvesting

This video presents a visual detection method applied to the challenging task of sweet pepper peduncle detection. The peduncle attaches the crop to the plant (main stem) and is of interest for automated harvesting because accurate peduncle detection plays a key role in successful crop detachment. Although peduncles are usually green, which is highly discriminative against red sweet peppers, many peppers are themselves green or mixed in colour. We therefore make use of both colour and geometry information acquired from an RGB-D sensor and apply a supervised-learning approach for detection. The performance of the proposed method is demonstrated and evaluated with qualitative and quantitative results, i.e., the area under the curve (AUC) of the detection's precision-recall curve. We achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. Our experience and learning are returned to the community through open documentation and manually annotated ground-truth datasets.
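The AUC figure above is the area under a precision-recall curve. As an illustration only (not the paper's evaluation code, and with made-up labels and scores), it can be computed with scikit-learn:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Toy ground-truth labels (1 = peduncle) and hypothetical classifier scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3])

# Precision/recall pairs over all score thresholds, then the area under them.
precision, recall, _ = precision_recall_curve(y_true, scores)
pr_auc = auc(recall, precision)
print(f"PR-AUC = {pr_auc:.2f}")
```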

RAL2016 peduncle detection paper

If you use this dataset in your research, we will be happy if you cite us! Please cite:


@article{sa2017peduncle,
  author={I. Sa and C. Lehnert and A. English and C. McCool and F. Dayoub and B. Upcroft and T. Perez},
  journal={IEEE Robotics and Automation Letters},
  title={Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting – Combined Colour and 3D Information},
  keywords={Agriculture;Color;Feature extraction;Geometry;Image color analysis;Robots;Three-dimensional displays;Agricultural Automation;Computer Vision for Automation;RGB-D Perception;Robotics in Agriculture and Forestry},
}

Experimental results


Demonstration video

The video below demonstrates peduncle detection of sweet pepper for autonomous crop harvesting using combined colour and 3D information. The time stamps below summarise the video. The video also contains a voice-over explanation, so please adjust the volume of your headset or speaker.
(00:00-00:54) Definition of a peduncle, and the importance and challenges of peduncle detection.
(00:54-01:50) Data collection procedures and details (e.g., field-trip location and 3D sample instances).
(01:50-02:22) Introduction of the colour and 3D geometric features utilised in this paper.
(02:22-03:00) Support Vector Machine (SVM) model selection procedures (parameter sweeping).
(03:00-03:22) Quantitative SVM model training and testing results (AUC).
(03:22-03:43) Qualitative detection results for red, green, and mixed sweet peppers without complex background.
(03:43-04:21) Same as above, but with a complex background.
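The parameter sweep mentioned at 02:22-03:00 is a standard SVM model-selection step. A minimal sketch with scikit-learn, where the features, labels, and grid values are all illustrative stand-ins rather than the ones used in the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for colour + 3D geometry feature vectors (one row per point).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

# Sweep the RBF kernel's C and gamma, scored by average precision
# (an approximation of the area under the precision-recall curve).
grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, scoring="average_precision", cv=3)
search.fit(X, y)
print(search.best_params_)
```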

Tested system specifications

We tested on Ubuntu 14.04.2 LTS (Trusty) with kernel 3.16.0-38-generic on an x86_64 machine. Point Cloud Library (PCL) 1.7 is used for storage and visualisation, along with MeshLab v1.3.2_64bit (Feb 23 2014). An Intel RealSense F200 is used for capturing RGB-D data, and the Kinect Fusion implementation included in PCL is used for 3D model reconstruction.
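Kinect Fusion builds a single 3D model by registering each incoming RGB-D frame against the growing reconstruction; the alignment at its core is a variant of the iterative closest point (ICP) algorithm. A minimal point-to-point ICP in numpy, purely as an illustrative sketch (the data here are synthetic, and KinFu's actual implementation is far more sophisticated):

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match nearest neighbours,
    then solve the best rigid transform via SVD (Kabsch)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # nearest neighbour in dst
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t

# Synthetic target: a 4x4x4 grid; source is the same cloud translated slightly.
g = np.arange(4) * 0.5
dst = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
src = dst + np.array([0.05, -0.02, 0.03])
for _ in range(5):                            # a few iterations align the clouds
    src, R, t = icp_step(src, dst)
print(np.abs(src - dst).max())                # residual after alignment
```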


Please download and extract the file below. (1.02GB)

Google drive link

The figure below displays information such as the number of vertices and the red/green/noise ratio. Please note that the ratio is heuristically estimated by eye and should only be used for rough quality inspection.
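As a rough illustration of such an inspection, a red/green ratio can be estimated directly from per-vertex RGB values. The dominant-channel rule below is a hypothetical heuristic, not the procedure used to produce the figure:

```python
import numpy as np

def colour_ratios(rgb):
    """rgb: (N, 3) uint8 array of per-vertex colours.
    Returns (red, green, other) fractions by a dominant-channel rule."""
    r, g, b = rgb[:, 0].astype(int), rgb[:, 1].astype(int), rgb[:, 2].astype(int)
    red = (r > g) & (r > b)
    green = (g > r) & (g > b)
    other = ~(red | green)          # ambiguous / noisy vertices
    n = len(rgb)
    return red.sum() / n, green.sum() / n, other.sum() / n

# Toy cloud: two reddish, one greenish, one grey vertex.
cloud = np.array([[200, 40, 30], [180, 60, 50], [40, 190, 60], [90, 90, 90]],
                 dtype=np.uint8)
print(colour_ratios(cloud))  # fractions summing to 1.0
```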


After extraction, you will find the “RAL2017_dataset_for_release” folder, which has the folder structure below.

There are two folders named “field_trip1” and “field_trip2”. Each contains four subfolders: “capsicum”, “peduncle”, “capsicum_peduncle”, and “full_model”.


We provide two data formats: the Point Cloud Data format (PCD) and the Polygon File Format (PLY, readable by MeshLab). The table below shows the number of samples in each subfolder for the corresponding file format.

PCD file format

PCD files are stored in binary format with the dimensions “x y z rgba”. PCL 1.7 is used and is available as a binary installation or as source code.
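For reference, every PCD file begins with an ASCII header describing those dimensions, even when the point payload is binary. A minimal header parser in plain Python (a sketch for inspection only; for real use, PCL or a dedicated library is the safer choice):

```python
def parse_pcd_header(data: bytes):
    """Read the ASCII header of a PCD file and return it as a dict,
    plus the byte offset where the point payload begins."""
    header = {}
    offset = 0
    while True:
        end = data.index(b"\n", offset)
        line = data[offset:end].decode("ascii")
        offset = end + 1
        if line.startswith("#"):          # comment line
            continue
        key, _, value = line.partition(" ")
        header[key] = value
        if key == "DATA":                 # header ends after the DATA line
            return header, offset

# A small synthetic header matching this dataset's "x y z rgba" layout.
sample = (b"# .PCD v0.7 - Point Cloud Data file format\n"
          b"VERSION 0.7\nFIELDS x y z rgba\nSIZE 4 4 4 4\n"
          b"TYPE F F F U\nCOUNT 1 1 1 1\nWIDTH 100\nHEIGHT 1\n"
          b"VIEWPOINT 0 0 0 1 0 0 0\nPOINTS 100\nDATA binary\n")
header, body_offset = parse_pcd_header(sample)
print(header["FIELDS"], header["DATA"])
```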

To view a PCD file, go to the folder containing the file and issue the command below.

pcl_viewer ./2015-10-29-12-53-39_kinfu_output_1.pcd

You should see the PCD viewer window.


PLY files

We use MeshLab v1.3.2_64bit (Feb 23 2014); simply double-clicking a PLY file should open it.

4 thoughts on “Sweet pepper and peduncle 3D datasets”

  1. Omid

    My name is Omid and I’m from Iran. I have a project like yours: I am working on 3D point classification in agricultural applications. Can you help me in this field? I need your help with programming the computation of surface normals and PFH. If possible, could you send me your code?
    Thank you

  2. enddl22 Post author

    Hello Omid,
    Thank you for your interest in our work. Unfortunately, I can’t share the code due to IP issues. However, it should be straightforward to extract PFH features (http://pointclouds.org/documentation/tutorials/pfh_estimation.php and https://github.com/daviddoria/Examples/blob/master/c%2B%2B/PCL/Descriptors/PointFeatureHistogram/PointFeatureHistogram.cpp) from the provided 3D dataset and feed them to an SVM or another classifier (http://scikit-learn.org/stable/modules/svm.html).
    This is conventional supervised learning, and there are plenty of examples that you can easily follow.
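To illustrate the surface-normal step asked about above: the standard approach fits a plane to each point's neighbourhood and takes the eigenvector of the covariance with the smallest eigenvalue. A brute-force numpy sketch of that idea (PCL's NormalEstimation does the same thing far more efficiently, and this is not the paper's code):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point surface normals from PCA over the k nearest neighbours.
    points: (N, 3) array. Brute-force O(N^2); fine for small clouds."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(points)
    for i, row in enumerate(d2):
        nbrs = points[np.argsort(row)[:k]]   # k nearest neighbours
        cov = np.cov(nbrs.T)                 # 3x3 covariance of the patch
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]           # smallest-eigenvalue direction
    return normals

# Sanity check: points on the z=0 plane should get normals near +/- (0, 0, 1).
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(size=(50, 2)), np.zeros(50)])
n = estimate_normals(plane)
print(np.abs(n[:, 2]).min())  # close to 1.0 for a flat plane
```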

  3. Omid

    I have a question.
    When using an RGB-D sensor, is one point cloud enough, or do we need to stitch multiple point clouds together? Do we need specific software, or is the ICP algorithm suitable?
