I am interested in mapping 2D pixel values from the original images to the point cloud. The inverse mapping (point cloud to pixel values) is quite straightforward using the built-in methods in OpenSfM:

shot = rec.shots
Pt2D_px = cam.normalized_to_pixel_coordinates(pt2D)

However, I did not manage to find suitable methods to map a 2D pixel in the original image to the corresponding point in the 3D point cloud. From my limited experience in Metashape, there is a direct approach for this mapping; however, I'm still unable to accomplish it even indirectly. I have tried the following:

nube = o3d.io.read_point_cloud('./opensfm/undistorted/openmvs/scene_dense_dense_y')

---

Any chance you have a solution I could look at? Trying to get this working, but I'm not quite there… here is what I've got so far using the resources linked above:

Pt2D = cam.pixel_to_normalized_coordinates(pt2D_px)
T3D_world = pose.inverse().transform(bearing)

I understand I need some form of scaling with the depth of the point, but there are no details in the documentation on how to obtain the correct depth for a given pixel.

I'm also doing this in the context of WebODM… I've modified the NodeODM Docker image to output the OpenSfM data so that I can access it at the -media-dir location for running this new action below. I'm quite new to this, so any insight/help is welcome and much appreciated!
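The depth-scaling step the thread is stuck on can be sketched with plain NumPy under a pinhole model. This is a sketch, not the OpenSfM API: `K`, `R`, `t`, and the function names are assumptions, and in a real pipeline the depth would come from a depth map or the dense reconstruction at that pixel. The pose convention assumed (camera point = R @ world point + t) matches the world-to-camera direction that `pose.transform` performs.

```python
import numpy as np

# Hypothetical names, not the OpenSfM API.
# K: 3x3 intrinsics; R, t: world-to-camera pose (x_cam = R @ x_world + t).

def world_to_pixel(p_world, K, R, t):
    """Forward projection (the 'straightforward' direction): world point -> pixel.
    Also returns the camera-space depth, which is exactly the scale factor
    needed for the reverse mapping."""
    p_cam = R @ p_world + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2], p_cam[2]

def pixel_to_world(u, v, depth, K, R, t):
    """Back-projection: pixel (u, v) plus a known camera-space depth -> world point.
    Without the depth, a pixel only defines a ray, not a point."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray with z = 1 in camera frame
    p_cam = ray_cam * depth                             # scale the ray by the depth
    return R.T @ (p_cam - t)                            # camera -> world (inverse pose)

# Round trip with made-up values: project a point, then back-project it
# using the recovered depth; we should land on the original point.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
p_world = np.array([0.5, -0.3, 4.0])

(u, v), d = world_to_pixel(p_world, K, R, t)
p_back = pixel_to_world(u, v, d, K, R, t)  # recovers p_world
```

The round trip makes the missing piece explicit: the forward projection discards the depth, so the reverse mapping needs it back from some external source, e.g. looking it up in OpenSfM's per-shot depth maps or intersecting the pixel's ray with the dense cloud.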