This article describes the main steps for the automatic creation of masks (to remove the background) from a rough 3D model. This scenario is suitable for cases when the object is static and the camera moves around it.
The object modeled in this example is a car wheel placed on a table (the background). The shooting scenario was to capture images of one side of the wheel while moving the camera around the table, then turn the wheel over and capture images of the other side using the same approach. The two subsets of images were then loaded into separate chunks and a rough 3D model was built for each side. Next, the masking-from-model feature was applied to each subset. Finally, the complete model was generated in a combined chunk.
The workflow includes the following steps: building a rough model, creating masks, and building the final model.
Building a rough model
Please check the recommendations for shooting scenarios in the articles available in the Image capture tips section. In the scenario used for this article, the object was captured from two sides, so two subsets of images were acquired.
Once the images are taken and loaded into Metashape, the main processing steps can be performed. Detailed information on 3D model reconstruction in Metashape can be found in the article: 3D model reconstruction.
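The same workflow can also be scripted. Below is a minimal sketch of loading the two image subsets into separate chunks, assuming the standalone Metashape Python API; the project path, photo paths, and chunk labels are placeholders and not part of the original article.

```python
import Metashape

# Minimal sketch: load the two image subsets into separate chunks.
# Assumes the standalone Metashape Python API; paths and labels below
# are placeholders.
doc = Metashape.Document()
doc.save("wheel_project.psx")  # hypothetical project file

side_a = doc.addChunk()
side_a.label = "wheel_side_A"
side_a.addPhotos(["photos/side_A/IMG_0001.jpg",
                  "photos/side_A/IMG_0002.jpg"])  # ...and so on

side_b = doc.addChunk()
side_b.label = "wheel_side_B"
side_b.addPhotos(["photos/side_B/IMG_0101.jpg",
                  "photos/side_B/IMG_0102.jpg"])  # ...and so on

doc.save()
```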
Since a detailed model is not required for mask generation, processing time can be reduced by setting the Accuracy value to Medium or Low at the alignment step.
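If you are scripting this step, the Accuracy preset is expressed through the downscale argument of matchPhotos in recent Metashape Python API versions. A hedged sketch, continuing the script above; the value-to-preset mapping should be verified against the API reference for your release.

```python
# Sketch: align each subset at reduced accuracy (continuing the script above).
# downscale=2 is assumed here to correspond to Medium accuracy -- verify
# against the API reference for your Metashape version.
for chunk in (side_a, side_b):
    chunk.matchPhotos(downscale=2, generic_preselection=True)
    chunk.alignCameras()
doc.save()
```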
After alignment is completed, the model can be built from the tie points. To do so, select the Build Model command from the Workflow menu and set the following recommended values for the parameters in the Build Model dialog:
For masking purposes it is enough to build a rough model using tie points or Low/Lowest quality depth maps as the source data; the Face count parameter can be set to Medium or Low.
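A corresponding sketch of the rough mesh build per chunk, continuing the script above; note that the tie-point data source is named PointCloudData in Metashape 1.x and TiePointsData in 2.x, so adjust to your version.

```python
# Sketch: build a rough mesh from tie points in each chunk.
# The tie-point source enum is PointCloudData in Metashape 1.x and
# TiePointsData in 2.x; the 2.x-style name is used here.
for chunk in (side_a, side_b):
    chunk.buildModel(source_data=Metashape.TiePointsData,
                     surface_type=Metashape.Arbitrary,
                     face_count=Metashape.MediumFaceCount)
doc.save()
```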
Once the model is built, remove the extra parts related to the background using the selection and crop tools on the Toolbar.
It is important to crop the model precisely, as this affects the quality of the mask edges.
Creating masks
Select the images in the Photos pane and open the context menu with the right mouse button. Select the Import Masks command and, in the corresponding dialog box, choose the From Model option and set Apply to: All cameras:
The masks will be created automatically:
Follow the same approach for the second subset of photos (in a separate chunk):
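The same mask generation can be scripted for both chunks. The sketch below assumes the Metashape Python API (the method is generateMasks in recent releases; older PhotoScan builds used importMasks instead, and the enum spelling may differ between versions).

```python
# Sketch: generate masks from the rough (cropped) model for all cameras
# in both chunks (continuing the script above). The MaskingModeModel
# enum name follows recent Metashape API releases.
for chunk in (side_a, side_b):
    chunk.generateMasks(masking_mode=Metashape.MaskingModeModel)
doc.save()
```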
Building the final model
The chunks with the image subsets can be merged once the masks have been created. Select the Workflow > Merge Chunks command and, in the Merge Chunks dialog, enable the chunks that you want to merge. Since the alignment procedure will be performed again for the combined data with masks, there is no need to merge the data contained in each chunk (tie points, models, and so on).
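A hedged sketch of the merge step, continuing the script above; the exact mergeChunks signature varies between API versions, so check the reference for your release.

```python
# Sketch: merge the two chunks; tie points and models are not merged,
# since alignment will be redone in the merged chunk. Some API versions
# expect chunk keys (side_a.key) rather than Chunk objects here.
doc.mergeChunks(chunks=[side_a, side_b])
merged = doc.chunks[-1]  # the merged chunk is appended to the document
doc.save()
```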
In the merged chunk, re-align the images using the masks created from the model. Select Workflow > Align Photos and specify the parameters in the Align Photos dialog. Make sure to enable the Reset current alignment option so that image matching starts from scratch:
Use the option Apply masks to: Key points, so that the masked areas are excluded from the feature detection procedure for each photo independently. As a result, no tie points that could affect the alignment will be detected on the background covered by the masks.
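In the Python API this corresponds roughly to filter_mask=True for matchPhotos and reset_alignment=True for alignCameras; a sketch, continuing the script above (parameter names may vary between versions).

```python
# Sketch: re-align the merged chunk with masks applied to key points.
# filter_mask=True is assumed to correspond to "Apply masks to: Key points";
# reset_alignment=True to "Reset current alignment".
merged.matchPhotos(downscale=1,  # higher accuracy than the rough pass
                   generic_preselection=True,
                   filter_mask=True,
                   mask_tiepoints=False)
merged.alignCameras(reset_alignment=True)
doc.save()
```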
The results of the alignment procedure in the combined chunk are presented in the image below:
Select Workflow > Build Model to build the final model. Use Depth maps as the source data and set the Quality parameter to High:
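A closing sketch of the final build, continuing the script above; for buildDepthMaps, downscale=2 is assumed to correspond to High quality in recent API versions, and the enum spellings follow the 1.x-style module-level names.

```python
# Sketch: build depth maps and the final mesh in the merged chunk.
# downscale=2 is assumed to correspond to High quality; MildFiltering,
# DepthMapsData, and HighFaceCount may be nested under enum classes in 2.x.
merged.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
merged.buildModel(source_data=Metashape.DepthMapsData,
                  surface_type=Metashape.Arbitrary,
                  face_count=Metashape.HighFaceCount)
doc.save()
```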
Spare wheel by Agisoft on Sketchfab