This tutorial covers the main stages of multispectral data processing:

  •  adding images using the multi-camera system (multispectral cameras) approach
  •  reflectance calibration procedure
  •  camera alignment and optimization
  •  surface reconstruction (dense cloud, mesh, digital elevation model)
  •  orthomosaic generation
  •  index calculation using Raster Calculator tool
  •  data export

(to follow the tutorial, please use Agisoft Metashape Professional 1.7.x)

The dataset was acquired with a senseFly eBee X platform carrying a MicaSense RedEdge-MX sensor.

Metashape follows MicaSense guidelines for reflectance calibration and multispectral data treatment; this approach has been confirmed (by inspecting Metashape results) during a joint collaboration between the Agisoft and MicaSense teams. Additional information regarding MicaSense recommendations can be found on their web-site and in GitHub examples:
If the reflectance calibration procedure is not applied (it cannot be performed without radiometric panel or DLS sun sensor data), Metashape will only apply vignette corrections, provided that the related information is stored in the image metadata.


1. Add Photos.

2. Locate reflectance panels. 

3. Input reflectance (albedo) values of the calibration panel for the bands.

4. Run reflectance calibration.

5. Align Photos.

6. Optimize Cameras.

7. Build Dense Cloud.

8. Build DEM.

9. Build Orthomosaic.

10. Calculate required index information.

11. Export the results.

Appendix A. Manual masking of the calibration images with the radiometric panel.

Appendix B. Reflectance panel database.

Appendix C. Controlling reflectance calculation.

Appendix D. Vignetting

1. Add Photos.

Images from MicaSense RedEdge-MX, MicaSense RedEdge, MicaSense Altum, Parrot Sequoia and DJI Phantom 4 Multispectral can be loaded at once for all bands.
Open the Workflow menu and choose the Add Photos option. Select all images, including the reflectance calibration images, and click the OK button. In the Add Photos dialog choose the “Multi-camera system” option:

If the images are stored in several folders, the same operation should be repeated for each folder. Please remember to add reflectance calibration images!

Metashape Pro can automatically sort the calibration images into a special camera folder in the Workspace pane if the image metadata indicates that they are calibration images. Such images are disabled automatically (so that they are not used in the actual processing).

If there is no such information in the image metadata, the calibration images will be detected automatically at the next step, or they can be arranged manually into a camera folder for calibration images in the Workspace pane if a custom reflectance panel is used for the calibration.

If the project has been exported in PSX format from the senseFly eMotion application, skip this step and simply open the project using the File Menu > Open command in Agisoft Metashape Professional.

2. Locate reflectance panels. 

Open the Tools menu and choose the Calibrate Reflectance option, then press the “Locate Panels” button:

As a result, the images with the panel will be moved to a separate folder, and masks will be applied to cover everything on the images except the panel itself. If the panels are not located automatically, use the manual approach described in Appendix A. If you are using the panel for the first time and its calibration has not yet been added to the Metashape Pro internal database, you will be prompted to load a calibration from a CSV file:

If you don't have a CSV file with calibration information, you can input the calibration values manually at the next step. If you own a MicaSense radiometric panel, you can request the corresponding CSV file from MicaSense directly:

3. Input reflectance (albedo) values of the calibration panel for the bands.

If the reflectance calibration was loaded from a CSV file or from the calibration database at the previous step, you can proceed to step 4. Otherwise, after the panels are located, the reflectance values corresponding to each band should be input according to the panel certificate. This can be done manually in the Calibrate Reflectance dialog or using the “Select Panel...” button, as described in Appendix B of this instruction.

4. Run reflectance calibration.

Check the “Use reflectance panels” and “Use sun sensor” options in the Calibrate Reflectance dialog to perform calibration based on panel data and/or image meta-information. Click OK to start the calibration process.
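Conceptually, the panel-based part of the calibration derives a per-band scale factor that maps raw pixel values to reflectance. The sketch below only illustrates this idea (the function name and numbers are hypothetical); Metashape's actual pipeline follows the MicaSense guidelines and additionally compensates for vignetting, exposure and sun-sensor data:

```python
import numpy as np

def panel_calibration_factor(panel_pixels, panel_albedo):
    """Illustrative sketch only: derive a per-band scale factor from the
    mean digital number (DN) over the masked panel area and the
    certified reflectance (albedo) of the panel for that band."""
    mean_dn = float(np.mean(panel_pixels))
    return panel_albedo / mean_dn

# Hypothetical values: a 0.49-albedo panel imaged at a mean DN of 20000
factor = panel_calibration_factor(np.full((10, 10), 20000.0), 0.49)
print(factor)            # scale factor applied to the whole band
print(20000.0 * factor)  # the panel area maps back to 0.49 reflectance
```

Applying the resulting factor to a whole band rescales it so that the panel area reproduces its certified albedo.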

- - - - Proceed with the data processing - - - - -

5. Align Photos.

Open Align Photos dialog from the Workflow menu and use the following parameters:

Preselection options speed up the processing of large datasets.

The result of the Align Photos operation is shown in the Model view as the estimated camera locations and the tie point cloud (representing the matching points between the images):

Blue rectangles represent the estimated camera locations and orientations. The bounding box defines the area of reconstruction for the further processing stages, such as Build Dense Cloud, Build Mesh, Build Tiled Model, Build DEM and Build Orthomosaic, and can be adjusted manually, if necessary, via the Rotate Region, Resize Region and Move Region tools.

6. Optimize Cameras.

To improve the accuracy of the alignment, use the Optimize Cameras option from the Tools menu and select the following parameters for optimization:

7. Build Dense Cloud.

Generating the dense point cloud allows a more accurate surface to be reconstructed, thus improving the quality of the final orthomosaic. Open the Build Dense Cloud dialog from the Workflow menu and use the following parameters:

  •  Higher quality results in a more accurate surface (a larger number of points), but takes longer to process. In most cases Medium quality is sufficient for aerial data processing, especially when terrain variation is low.
  •  Point colors may be disabled if the dense cloud is not among the required products; this slightly reduces the processing time as well as the disk space required to store the project data.

The resulting dense point cloud will be displayed in the Model view:

If the source bands have RGB labels, Metashape will try to display the dense cloud point colors accordingly. Otherwise, the Primary Channel is displayed in grayscale mode.

To switch the primary channel, use the Set Primary Channel option from the Tools menu:

8. Build DEM.

The Build Digital Elevation Model (DEM) step generates an accurate surface to be used as a source for orthomosaic generation in less time than the Build Mesh operation, although the latter surface type may be required for complex surface/terrain types.

Open Build DEM dialog from the Workflow menu and use the following parameters:

  •  In practice you only need to select the Dense Cloud as the source for the reconstruction, specify the coordinate system for the DEM referencing and choose the interpolation method.
  •  The Extrapolated option produces a gap-free surface extrapolated to the bounding box sides, while the default option (Interpolation - Enabled) leaves valid elevation values only for the areas seen from at least one aligned camera.

After the processing is finished, display the reconstructed DEM surface in Ortho view mode by double-clicking on the DEM instance in the chunk's contents on the Workspace pane:

Ortho view:

9. Build Orthomosaic.

Open Build Orthomosaic dialog from the Workflow menu and use the following parameters:

  •  Use DEM as the source surface. If necessary, adjust the orthomosaic resolution by clicking the Metres button.
    Pay attention to the Blending Mode option: if you wish to exclude any blending or averaging applied to the images, select the Disabled option.

To review the orthomosaic generation result switch back to Ortho view mode by double-clicking on the orthomosaic label in the Workspace pane:

10. Calculate required index information.

Use the Set Raster Transform option from the Tools menu to open the Raster Calculator dialog. On the Transform tab specify the index values that you would like to calculate from the source data.

More than one formula can be input if it is necessary to export the orthomosaic with several output bands related to different indices, or if the calculated indices should be represented in false-color mode.
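For example, the classic NDVI can be entered as a Raster Calculator formula. Assuming the usual MicaSense RedEdge-MX band order (Red stored in band 3, NIR in band 4; check your sensor's band layout), the formula would be (B4 - B3) / (B4 + B3). A minimal NumPy sketch of the same per-pixel computation:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    Matches the Raster Calculator formula (B4 - B3) / (B4 + B3) when
    Red is stored in band 3 and NIR in band 4."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Where both bands are zero, define NDVI as 0 instead of dividing by 0.
    return np.divide(nir - red, denom,
                     out=np.zeros_like(denom), where=denom != 0)

# Hypothetical reflectance values in the 0 - 1 range
red = np.array([0.10, 0.20, 0.0])
nir = np.array([0.50, 0.20, 0.0])
print(ndvi(red, nir))
```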

On the Palette tab select how one of the calculated indices should be visualized, or use the False Colors representation for three output bands (note that for this approach the output band values should be in the 0 - 1 range for proper RGB representation; the values are automatically scaled to the 8-bit RGB representation in False Colors mode).

The following picture shows the representation of a single output band defined on the Transform tab (B1 in this case). The color representation of the index can be selected from the list of presets, loaded from a *.clr file or modified manually, if necessary.
The range values under the histogram define the absolute values for the selected index (output band); the colors from the palette section are scaled to the selected range in the following way: the Min. Range value corresponds to 0 on the palette color scale, and the Max. Range value corresponds to 1:

When the False Colors option is selected, the histogram area can be ignored. It is only necessary to select the correspondence between the output bands and the RGB colors in False Colors mode:
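The two mappings described above can be sketched in plain Python (the function names are illustrative, not part of Metashape):

```python
def palette_position(value, range_min, range_max):
    """Map an index value onto the 0 - 1 palette scale: the Min. Range
    value corresponds to 0, the Max. Range value to 1, and values
    outside the range are clamped."""
    t = (value - range_min) / (range_max - range_min)
    return min(1.0, max(0.0, t))

def false_color_8bit(value):
    """Scale a 0 - 1 output band value to the 8-bit 0 - 255 range, as
    done for the RGB representation in False Colors mode."""
    return round(min(1.0, max(0.0, value)) * 255)

print(palette_position(0.0, -1.0, 1.0))  # an NDVI of 0 on a -1..1 range
print(false_color_8bit(1.0))             # a full-scale band value
```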

11. Export the results.

To export the results of the orthomosaic generation use the File Menu > Export > Export Orthomosaic command.

Pay attention to the Raster Transform section in the Export Orthomosaic dialog. The following options are available:
  •  None - the exported orthomosaic will contain the same number of bands as the source data; any transformation formulas are ignored;
  •  Index Value - saves the output bands defined by the transformation formulas in the Raster Calculator dialog;
  •  Index Colors - saves the orthomosaic in RGB colors according to the Palette settings in the Raster Calculator dialog. The exported raster image will look identical to the orthomosaic displayed in Ortho view mode, provided that the “Enabled” option is checked in the Transform section of the Raster Calculator dialog.

Some external packages for orthomosaic analysis do not properly handle the transparency saved in the alpha channel (for example, users report problems in QGIS and ArcGIS), therefore we recommend disabling the “Write alpha channel” option in the Export dialog.

Metashape Professional performs the reflectance calibration operation according to MicaSense recommendations, so the values in the output bands will still be 16-bit integer values like the input values, but 100% reflectance for each band will correspond to the middle of the available range, i.e. to the value 32768. If it is necessary to export the reflectance normalized to the 0 - 1 range, create output bands in the Raster Calculator dialog and for each of them input a formula that divides the source value by the normalization factor: B1/32768; B2/32768; B3/32768; B4/32768; B5/32768:

Then, in the Export Orthomosaic dialog, select the Index Value option in the Raster Transform section.
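The normalization described above can be sanity-checked with a few lines of NumPy (the pixel values below are hypothetical):

```python
import numpy as np

# In Metashape's 16-bit reflectance output, 100% reflectance corresponds
# to the middle of the available range, i.e. to the value 32768.
NORMALIZATION_FACTOR = 32768.0

def normalize_band(band_16bit):
    """Scale a 16-bit reflectance band to the 0 - 1 range, equivalent
    to a Raster Calculator formula such as B1/32768."""
    return band_16bit.astype(np.float64) / NORMALIZATION_FACTOR

# Hypothetical pixel values: 32768 -> 1.0 (100% reflectance), 16384 -> 0.5
band = np.array([[32768, 16384], [8192, 0]], dtype=np.uint16)
print(normalize_band(band))
```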

Appendix A. Manual masking of the calibration images with the radiometric panel.

If the panels cannot be detected automatically for some reason (for example, if the calibration plate is not supported and differs from MicaSense panels) and the “Calibration images” folder is not created automatically, create the camera group in the Workspace pane manually. Select the images that contain the calibration panel, right-click on the selection and choose the “Move Cameras” > “New Camera Group” option, then right-click on the newly created folder, name it “Calibration images” (without the quotes) and disable the cameras in it. In the manual approach it is also necessary to apply masks to the calibration images manually. For every calibration image (every camera in the “Calibration images” folder of the active chunk) create a mask that covers everything not related to the calibration plate:

So that only the plate area is unmasked and everything else is masked out.

It is necessary to apply masks for each calibration image and for each band! To switch between bands use “Set Primary Channel” in the context menu after right-clicking on the chunk's label in the Workspace pane. 

After the masking procedure is finished, proceed to step 3 of this instruction and input the reflectance values for each band for the calibration panel, then proceed to the calibration procedure.

Appendix B. Reflectance panel database.

Metashape stores information about previously used reflectance panels. Thus, when calibration images of the same panel are detected, Metashape automatically suggests the reflectance values from the internal database. The database of reflectance panels can be edited via the “Select Reflectance Panel” dialog, accessible by clicking the “Select panel” button in the “Calibrate Reflectance” dialog.

In the “Select Reflectance Panel” dialog it is possible to:

- load reflectance information from a CSV file; 

- save current table (wavelength / reflectance factor); 

- edit the name of a panel in the database (the name is used in “Calibrate Reflectance” dialog); 

- remove the panel information from the database. 
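A panel calibration CSV is essentially a table of wavelength / reflectance-factor pairs. The snippet below parses such a table; note that the exact CSV layout Metashape expects may differ - the two-column format and the sample values here are assumptions for illustration only:

```python
import csv
import io

def load_panel_csv(text):
    """Parse an assumed two-column CSV of wavelength (nm) and
    reflectance factor into a dictionary."""
    table = {}
    for row in csv.reader(io.StringIO(text)):
        # Skip blank lines and a possible header line.
        if not row or row[0].strip().lower().startswith("wavelength"):
            continue
        table[float(row[0])] = float(row[1])
    return table

# Hypothetical panel data for the five RedEdge-MX band wavelengths
sample = "wavelength,reflectance\n475,0.49\n560,0.49\n668,0.48\n717,0.48\n840,0.47\n"
panel = load_panel_csv(sample)
print(panel)
```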

Appendix C. Controlling reflectance calculation.

Reflectance calculation can be enabled/disabled separately for each sensor in the Camera Calibration dialog. 

If the reflectance calibration results should be taken into account during the orthomosaic generation process, open the Camera Calibration dialog and make sure that the “Normalize band sensitivity” option is checked:

If the “Normalize band sensitivity” option is unchecked, the resulting orthomosaic will contain the default color values, without any correction from the reflectance panel calibration or the image meta-information (including the sun sensor data).

Appendix D. Vignetting

Vignetting is modeled in Metashape using a bivariate polynomial of degree 3:

V(x, y) = exp(sum_mn c_mn * x^m * y^n),

where x and y are normalized pixel coordinates, such that the top left corner of the image has coordinates (-1, -1) and the bottom right corner has coordinates (1, 1).

To compensate for vignetting, each pixel value in the image is divided by the corresponding vignetting factor:

I'_ij = I_ij / V(x_i, y_j) = I_ij / exp(sum_mn c_mn * x_i^m * y_j^n),

where:

x_i = 2 * (i + 0.5) / w - 1

y_j = 2 * (j + 0.5) / h - 1

i, j - integer column and row pixel coordinates

w, h - image width and height in pixels

x_i, y_j - normalized pixel coordinates

c_mn - vignetting polynomial coefficients

I_ij - pixel intensity in the original image with vignetting

I'_ij - pixel intensity in the corrected image without vignetting
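The model above can be sketched in NumPy as follows. The coefficient matrix here is hypothetical (in practice Metashape takes the coefficients from the image metadata):

```python
import numpy as np

def vignetting_factor(coeffs, x, y):
    """Evaluate V(x, y) = exp(sum over m, n of c_mn * x^m * y^n) for a
    coefficient matrix coeffs[m][n]."""
    poly = sum(c * x**m * y**n for (m, n), c in np.ndenumerate(coeffs))
    return np.exp(poly)

def correct_vignetting(image, coeffs):
    """Divide every pixel by its vignetting factor.

    image  - 2D array indexed as image[j, i] (row j, column i)
    coeffs - (degree + 1) x (degree + 1) coefficient matrix
    """
    h, w = image.shape
    i = np.arange(w)
    j = np.arange(h)
    # Normalized pixel coordinates: image corners map to (-1, -1) and (1, 1).
    x = 2.0 * (i + 0.5) / w - 1.0
    y = 2.0 * (j + 0.5) / h - 1.0
    xx, yy = np.meshgrid(x, y)  # per-pixel coordinate grids
    return image / vignetting_factor(coeffs, xx, yy)

# With all coefficients zero, V(x, y) = 1 and the image is unchanged.
coeffs = np.zeros((4, 4))
image = np.full((3, 5), 100.0)
print(np.allclose(correct_vignetting(image, coeffs), image))  # True
```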