Data and Folder Structure

πŸ“ Guide to form a data for different datatypes

This article shows how to organize your folder structure into Data for different dataset types. You can find our sample datasets here.

LiDAR Fusion :milky-way:

3D Point Cloud folder structure

We designed our folder structure to separate different types of sensors, such as camera_image and lidar_point_cloud. When multiple sensors of the same type exist, we differentiate them by appending "_" and an index number, so the final folder names become camera_image_0 and lidar_point_cloud_0.

A LiDAR Fusion Data is linked and combined from multiple files that share the same file name across all sensor folders, as below:

Additionally, each sensor type may have its own configuration, stored in a separate folder. camera_config, for example, affects all projection-related features.

In some scenarios, you may have pre-label results, which should be organized in a result folder. The LiDAR Fusion folder should therefore be organized as follows:

.
β”œβ”€β”€ camera_config // Camera config, more details can reference the "Camera Config" section
β”‚   β”œβ”€β”€ data1.json
β”‚   └── data2.json
β”œβ”€β”€ camera_image_0 // Camera image 0
β”‚   β”œβ”€β”€ data1.jpg
β”‚   └── data2.jpg
β”œβ”€β”€ camera_image_1 // Camera image 1
β”‚   β”œβ”€β”€ data1.jpg
β”‚   └── data2.jpg
β”œβ”€β”€ camera_image_2 // Camera image 2
β”‚   β”œβ”€β”€ data1.jpg
β”‚   └── data2.jpg
β”œβ”€β”€ lidar_point_cloud_0 // Lidar point cloud 0
β”‚   β”œβ”€β”€ data1.pcd
β”‚   └── data2.pcd
└── result // Annotation result, more details can reference "Data Annotation Result" section
    β”œβ”€β”€ data1.json
    β”œβ”€β”€ data1_lidar_point_cloud_0_segmentation.pcd
    β”œβ”€β”€ data2.json
    └── data2_lidar_point_cloud_0_segmentation.pcd

If you are going to organize your data in Batch or Scene, please refer to the guide on uploading data with Batch and Scene.

πŸ“˜

LiDAR Fusion folder structure tips

  1. The lidar_point_cloud_0 folder is mandatory for LiDAR datasets. Images without related LiDAR files will be ignored.
  2. If your images are not shown, please check your image folder names. It is camera_image_0, not image_0.
  3. If your 2D image results don't project, or project incorrectly, please check your camera configs.
  4. BasicAI offers an additional camera calibration feature; if you have a wrong projection, feel free to calibrate online!

Camera config

All projection-related features will be disabled when the camera configuration parameters are wrong or empty.

The parameters in the Camera config file are:

  1. camera_intrinsic or camera_internal: a dictionary of the camera intrinsic matrix with four keys: fx, cx, fy, and cy.

  2. camera_extrinsic or camera_external: a list form of the camera extrinsic matrix, obtained by converting a 4x4 extrinsic matrix into a list.

  3. distortionK: a list of radial distortion coefficients, only needed when your images have distortion; up to 8 parameters are supported. It is optional, so delete this key when you don't need it.

  4. distortionP: a list of tangential distortion coefficients, only needed when your images have distortion; up to 2 parameters are supported. It is optional, so delete this key when you don't need it.

The camera configs for all 2D images corresponding to the same PCD file are stored in a single JSON file.

❗If there are multiple images for one Data, the camera folder index (camera_image_0, camera_image_1, ...) must correspond to the order of the objects in the JSON array.

Here is a sample camera_config JSON file:

❗

Please don't use comments in the camera config JSON.

[
  {
    "camera_internal": {
      "fx": 382.06535583,
      "cx": 326.66902661,
      "fy": 421.05123478,
      "cy": 254.70249315
    },
    "camera_external": [
      0.76866726,
      0.04361939,
      0.63815985,
      -1.59,
      -0.63870827,
      -0.00174367,
      0.76944701,
      0.91,
      0.03467555,
      -0.9990467,
      0.02651976,
      0.96,
      0,
      0,
      0,
      1
    ],
    "rowMajor": true,
    "distortionK": [
      -0.30912646651268,
      0.0669714063405991
    ],
    "distortionP": [
      0.00262697599828243,
      0.00106896553188562
    ],
    "distortionInvP": [
      800.836,
      515.212,
      -36.9548,
      39.5822,
      85.4095,
      -23.9415,
      -40.625,
      32.0152,
      37.9534,
      9.22325
    ],
    "width": 1920,
    "height": 1280
  }
]
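To illustrate how these parameters work together, the sketch below projects one 3D point into the image with a plain pinhole model. Distortion is omitted, and the assumption that camera_external maps point cloud coordinates to camera coordinates depends on how your extrinsic matrix was exported:

import json
import numpy as np

with open("camera_config/data1.json") as f:
    cfg = json.load(f)[0]  # first object = config for camera_image_0

# Flattened 16-element list -> 4x4 matrix; transpose if stored column-major
extrinsic = np.array(cfg["camera_external"]).reshape(4, 4)
if not cfg.get("rowMajor", True):
    extrinsic = extrinsic.T

fx, fy = cfg["camera_internal"]["fx"], cfg["camera_internal"]["fy"]
cx, cy = cfg["camera_internal"]["cx"], cfg["camera_internal"]["cy"]

# Assumption: the extrinsic maps point cloud coordinates to camera coordinates
point = np.array([10.0, 2.0, 1.0, 1.0])  # homogeneous point from the PCD
x, y, z, _ = extrinsic @ point

u = fx * x / z + cx  # pinhole projection (distortion coefficients ignored)
v = fy * y / z + cy
inside = 0 <= u < cfg["width"] and 0 <= v < cfg["height"]
print(u, v, inside)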

πŸ“˜

Camera config references

  1. For more details about camera_intrinsic and camera_extrinsic, please refer to camera intrinsic and extrinsic
  2. For more details about distortion, please refer to camera calibration

Compatible with LiDAR Basic

Images and camera configurations are optional in LiDAR Fusion, which keeps it compatible with the deprecated LiDAR Basic type. A LiDAR Fusion dataset that contains only LiDAR data should be organized as below:

.
β”œβ”€β”€ lidar_point_cloud_0 // Lidar point cloud 0
β”‚   β”œβ”€β”€ data1.pcd
β”‚   └── data2.pcd
└── result // Annotation result, more details can reference "Data Annotation Result" section
    β”œβ”€β”€ data1.json
    β”œβ”€β”€ data1_lidar_point_cloud_0_segmentation.pcd
    β”œβ”€β”€ data2.json
    └── data2_lidar_point_cloud_0_segmentation.pcd

RGB-colored PCDs

PCDs in the LiDAR Fusion dataset support RGB color values: if the PCDs you upload have an extra rgb field, you can change the render mode to RGB. Please view the sample below.

# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z rgb
SIZE 4 4 4 4
TYPE F F F U
COUNT 1 1 1 1
WIDTH 1877075
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 1877075
DATA ascii
30.976604 -0.63041502 -3.4124653 4285229679
30.999643 -0.67684662 -3.4000406 4285492592
12.957853 -68.076241 -2.4851601 4285690998
13.038503 -67.850151 -2.4753263 4285494398
13.031118 -67.778465 -2.4981151 4285165947
12.97576 -67.642067 -2.5326648 4285626750
13.014338 -67.527512 -2.5141547 4286349446
13.04153 -67.413116 -2.4968615 4285494913
13.053127 -67.361099 -2.4970944 4285429118
13.000272 -67.278008 -2.5389211 4286086016
13.028088 -67.22789 -2.5274782 4285494913

How to convert RGB values to decimal

The rgb field stores an RGB value in decimal. The Python code below demonstrates how to convert an RGB value like (255, 255, 255) to the decimal 16777215.

# Pack the 8-bit R, G, B channels into a single integer
rgb = (255, 255, 255)
decimal = (rgb[0] << 16) + (rgb[1] << 8) + rgb[2]
print(decimal)  # 16777215
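The reverse conversion unpacks the channels with shifts and masks; the & 0xFF masks also discard the extra high byte (e.g., an alpha channel) that the decimal values in the PCD sample above carry:

decimal = 4285229679  # a value from the sample PCD above
r = (decimal >> 16) & 0xFF
g = (decimal >> 8) & 0xFF
b = decimal & 0xFF
print(r, g, b)  # 107 106 111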

Change to RGB mode in the LiDAR Fusion tool

After RGB PCDs are uploaded successfully, you will see an extra option under the color selection in the Setting panel of the LiDAR tool. Switch to RGB to render the point cloud in RGB.

4D BEV folder structure

4D BEV is a data type between Scene and Data. Multiple images from different sensors and timestamps make up the image part of the data, while their PCDs are merged into a single PCD offline. The data is constructed like this:

The camera_config folder is for projections, and the result folder is for pre-label results. The final folder structure will look like the following:

.
β”œβ”€β”€ data1 // Data, the directory name is the data name
β”‚   β”œβ”€β”€ global_pose // The pose offset parameters used for each frame during synthesis, leave it as "[]" inside the JSON if you don't have any pose info
β”‚   β”‚   └── data1.json
β”‚   β”œβ”€β”€ camera_config // Camera config, more details can reference the "Point Cloud Camera Config" section
β”‚   β”‚   └── data1.json
β”‚   β”œβ”€β”€ lidar_point_cloud // Synthetic point cloud obtained by merging multiple frames
β”‚   β”‚   └── data1.pcd
β”‚   β”œβ”€β”€ camera_image_0 // All frame images of camera image 0
β”‚   β”‚   β”œβ”€β”€ data1_01.png // will be the first frame's image
β”‚   β”‚   └── data1_02.png
β”‚   β”œβ”€β”€ camera_image_1 // All frame images of camera image 1
β”‚   β”‚   β”œβ”€β”€ data1_01.png
β”‚   β”‚   └── data1_02.png
│   └── ...
β”œβ”€β”€ data2 // Data, the directory name is the data name
β”‚   β”œβ”€β”€ global_pose // The pose offset parameters used for each frame during synthesis, leave it as "[]" inside the JSON if you don't have any pose info
β”‚   β”‚   └── data2.json
β”‚   β”œβ”€β”€ camera_config // Camera config, more details can reference the "Point Cloud Camera Config" section
β”‚   β”‚   └── data2.json
β”‚   β”œβ”€β”€ lidar_point_cloud // Synthetic point cloud obtained by merging multiple frames
β”‚   β”‚   └── data2.pcd
β”‚   β”œβ”€β”€ camera_image_0 // All frame images of camera image 0
β”‚   β”‚   β”œβ”€β”€ data2_01.png // will be the first frame's image
β”‚   β”‚   └── data2_02.png
β”‚   β”œβ”€β”€ camera_image_1 // All frame images of camera image 1
β”‚   β”‚   β”œβ”€β”€ data2_01.png
β”‚   β”‚   └── data2_02.png
│   └── ...
└── result // Annotation result, more details can reference "Data Annotation Result" section
    β”œβ”€β”€ data1.json
    └── data2.json

πŸ“˜

4D BEV folder structure tips

  1. There is NO "_0" suffix after the lidar_point_cloud folder name.
  2. Only the first LiDAR file under a lidar_point_cloud folder will be parsed; the others will be ignored.
  3. All images across different timestamps are aligned based on their file names under image folders such as camera_image_0. For example, data1_01.png will be the previous frame's image relative to data1_02.png (see the sketch after this list).
  4. Scene level is not supported in 4D BEV.
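Here is a minimal sketch of that name-based alignment, assuming the data1 layout shown above:

from pathlib import Path

data_dir = Path("data1")
cameras = sorted(p for p in data_dir.iterdir()
                 if p.name.startswith("camera_image_"))

# Sorting file names inside each camera folder orders the frames;
# index i across all camera folders then forms one timestamp
frames = [sorted(f.name for f in cam.iterdir()) for cam in cameras]
for i, names in enumerate(zip(*frames)):
    print(f"frame {i}: {names}")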

Image πŸŒ„

Images can be uploaded directly to an Image dataset, while a video can be treated as a Scene after frame extraction.

Direct upload

A zip with or without the image_0 folder level is supported:

.
β”œβ”€β”€ 1.png 
β”œβ”€β”€ 2.png 
β”œβ”€β”€ 3.png 
β”œβ”€β”€ 4.png 
β”œβ”€β”€ 5.png 
└── 6.png
.
└── image_0
    β”œβ”€β”€ 1.png
    β”œβ”€β”€ 2.png
    β”œβ”€β”€ 3.png
    β”œβ”€β”€ 4.png
    └── 5.png

Upload with results

If you want to upload data with pre-label results, you must put all images inside the image_0 folder. Pre-label results in the result folder correspond to image files by name.

.
β”œβ”€β”€ image_0
β”‚   β”œβ”€β”€ 1.png
β”‚   β”œβ”€β”€ 2.png
β”‚   β”œβ”€β”€ 3.png
β”‚   β”œβ”€β”€ 4.png
β”‚   └── 5.png
└── result
    β”œβ”€β”€ 1.json
    β”œβ”€β”€ 2.json
    β”œβ”€β”€ 3.json
    β”œβ”€β”€ 4.json
    └── 5.json
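Since results are matched to images purely by file name, a quick sanity check like the sketch below can catch mismatches before uploading (the folder names follow the tree above):

from pathlib import Path

images = {p.stem for p in Path("image_0").iterdir()}
results = {p.stem for p in Path("result").iterdir()}

# Every image should have a result of the same base name, and vice versa
print("images without results:", images - results)
print("results without images:", results - images)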

Upload video as a Scene

If you are going to run a segmentation or instance task on a video, you can upload the video with a frame extraction configuration. After extraction, each video will be treated as a Scene, while each frame will be treated as a Data. Since the video itself is treated as a Scene, an additional scene level beyond the video is prohibited.

.
└── Video.MP4        ## will be treated as a Scene
    β”œβ”€β”€ First_frame   ## will be treated as a Data
    └── Second_frame  ## will be treated as a Data
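The extraction itself happens on the platform according to your frame extraction configuration; if you want to preview locally how a video breaks into per-frame Data, an OpenCV sketch like the following mirrors the idea (it is not the platform's implementation):

import cv2

cap = cv2.VideoCapture("Video.MP4")
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each extracted frame corresponds to one Data inside the Scene
    cv2.imwrite(f"frame_{index:04d}.png", frame)
    index += 1
cap.release()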

Audio & Video :movie-camera:

Suppose you plan to annotate the video's or audio's timeline or audio track rather than annotating frames as images. In that case, you need to upload your data to the Audio & Video dataset as shown:

.
β”œβ”€β”€ Video1.MP4  ## will be treated as a Data
β”œβ”€β”€ Video2.MP4
└── Audio1.MP3

Each audio or video file is a single Data; it cannot be broken into individual frames.

Text :page-facing-up:

If you want to annotate text entities and relationships, you can upload your data to the Text dataset as shown:

Direct upload

.
β”œβ”€β”€ Text1.txt  ## will be treated as a Data
β”œβ”€β”€ Text2.csv  ## Each row in the first column, except for the first row, will be treated as a Data
β”œβ”€β”€ Text3.xlsx
└── Text4.xls

Each .txt file will be treated as a single Data. For .csv/.xlsx/.xls files, only the text in the first column will be parsed, and each row, excluding the first (header) row, will be treated as a single Data.
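Here is a sketch of that parsing rule using pandas (an assumption about tooling, not BasicAI's internal parser):

import pandas as pd

# The first row is read as the header and skipped; only the first
# column becomes Data. Use pd.read_excel for .xlsx/.xls files.
df = pd.read_csv("Text2.csv")
for text in df.iloc[:, 0]:
    print(text)  # each row is one Data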

Upload as compressed files with results

If you want to upload data as compressed files with pre-label results, follow the format below. A zip with or without the text_0 folder level is supported; however, the file type must be .txt.

.
β”œβ”€β”€ text_0 // Text 0
β”‚   β”œβ”€β”€ data1.txt
β”‚   └── data2.txt
β”œβ”€β”€ text_1 // Text 1
β”‚   β”œβ”€β”€ data1.txt
β”‚   └── data2.txt
β”œβ”€β”€ data // Data info, only for exporting, more details can reference "Data Info" section
β”‚Β Β  β”œβ”€β”€ ***.xlsx/xls/csv
β”‚Β Β  β”œβ”€β”€ data1.json
β”‚Β Β  └── data2.json
β”œβ”€β”€ result // Annotation result, more details can reference "Data Annotation Result" section
β”‚   β”œβ”€β”€ data1.json
β”‚   └── data2.json
└── batch1 // Batch, the structure is similar to the root directory
    β”œβ”€β”€ text_0
    β”œβ”€β”€ ...
    β”œβ”€β”€ data
    └── result

What’s Next

Learn how to upload data with Batch and Scene.