Package com.irurueta.ar.sfm
This package contains classes related to Structure From Motion techniques, used to obtain 3D reconstructed data from matched points obtained as a camera moves.
Class Summary:

- Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs, using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views, using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views, using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
- Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views, using SLAM with absolute orientation for scale and orientation estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views, using SLAM with absolute orientation for scale and orientation estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views, using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
- Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views, using SLAM with absolute orientation for scale and orientation estimation.
- BaseAbsoluteOrientationSlamPairedViewsSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamPairedViewsSparseReconstructorConfiguration<D, C>, R extends BaseSlamPairedViewsSparseReconstructor<D, C, R, L, S>, L extends BaseSlamPairedViewsSparseReconstructorListener<R>, S extends AbsoluteOrientationBaseSlamEstimator<D>>: Base class in charge of estimating pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs, and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
- BaseAbsoluteOrientationSlamSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamSparseReconstructorConfiguration<D, C>, R extends BaseSlamSparseReconstructor<D, C, R, L, S>, L extends BaseSlamSparseReconstructorListener<R>, S extends AbsoluteOrientationBaseSlamEstimator<D>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views, and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
- BaseAbsoluteOrientationSlamTwoViewsSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamTwoViewsSparseReconstructorConfiguration<D, C>, R extends BaseSlamTwoViewsSparseReconstructor<D, C, R, L, S>, L extends BaseSlamTwoViewsSparseReconstructorListener<R>, S extends AbsoluteOrientationBaseSlamEstimator<D>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views, and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
- BasePairedViewsSparseReconstructor<C extends BasePairedViewsSparseReconstructorConfiguration<C>, R extends BasePairedViewsSparseReconstructor<C, R, L>, L extends BasePairedViewsSparseReconstructorListener<R>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in pairs of views.
- BasePairedViewsSparseReconstructorConfiguration<T extends BasePairedViewsSparseReconstructorConfiguration<T>>: Base class containing configuration for a paired view based sparse re-constructor.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
- BaseSlamPairedViewsSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamPairedViewsSparseReconstructorConfiguration<D, C>, R extends BaseSlamPairedViewsSparseReconstructor<D, C, R, L, S>, L extends BaseSlamPairedViewsSparseReconstructorListener<R>, S extends BaseSlamEstimator<D>>
- BaseSlamPairedViewsSparseReconstructorConfiguration<C extends BaseCalibrationData, T extends BaseSlamPairedViewsSparseReconstructorConfiguration<C, T>>: Contains base configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- BaseSlamPairedViewsSparseReconstructorListener<R extends BaseSlamPairedViewsSparseReconstructor<?, ?, ?, ?, ?>>: Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
- BaseSlamSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamSparseReconstructorConfiguration<D, C>, R extends BaseSlamSparseReconstructor<D, C, R, L, S>, L extends BaseSlamSparseReconstructorListener<R>, S extends BaseSlamEstimator<D>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views, and also in charge of estimating overall scene scale by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
- BaseSlamSparseReconstructorConfiguration<C extends BaseCalibrationData, T extends BaseSlamSparseReconstructorConfiguration<C, T>>: Contains base configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
- BaseSlamTwoViewsSparseReconstructor<D extends BaseCalibrationData, C extends BaseSlamTwoViewsSparseReconstructorConfiguration<D, C>, R extends BaseSlamTwoViewsSparseReconstructor<D, C, R, L, S>, L extends BaseSlamTwoViewsSparseReconstructorListener<R>, S extends BaseSlamEstimator<D>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views, and also in charge of estimating overall scene scale by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
- BaseSlamTwoViewsSparseReconstructorConfiguration<C extends BaseCalibrationData, T extends BaseSlamTwoViewsSparseReconstructorConfiguration<C, T>>: Contains base configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- BaseSlamTwoViewsSparseReconstructorListener<R extends BaseSlamTwoViewsSparseReconstructor<?, ?, ?, ?, ?>>: Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
- BaseSparseReconstructor<C extends BaseSparseReconstructorConfiguration<C>, R extends BaseSparseReconstructor<C, R, L>, L extends BaseSparseReconstructorListener<R>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences for multiple views.
- Base class containing configuration for a sparse re-constructor supporting multiple views.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
- BaseTwoViewsSparseReconstructor<C extends BaseTwoViewsSparseReconstructorConfiguration<C>, R extends BaseTwoViewsSparseReconstructor<C, R, L>, L extends BaseTwoViewsSparseReconstructorListener<R>>: Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views.
- BaseTwoViewsSparseReconstructorConfiguration<T extends BaseTwoViewsSparseReconstructorConfiguration<T>>: Base class containing configuration for a two view sparse re-constructor.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
- Exception raised if the reconstruction process is cancelled.
- Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs, using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
- Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs, using SLAM with constant velocity model for scale estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views, using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
- Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views, using SLAM with constant velocity model for scale estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views, using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
- Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views, using SLAM with constant velocity model for scale estimation.
- Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix and assuming zero skewness and a principal point at the origin for the intrinsic parameters of the estimated cameras.
- Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix to obtain the Dual Image of the Absolute Conic by solving the Kruppa equations, and hence the essential matrix; once computed, the essential matrix is used to determine the best pair of camera rotations and translations by triangulating a set of matched points and checking that their triangulation lies in front of the cameras.
- Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix and intrinsic parameters provided for the left and right views (which can be obtained by offline calibration) to compute the essential matrix and choose the best combination of rotation and translation for the estimated cameras, so that triangulated 3D points obtained from the provided matched 2D points are located in front of the estimated cameras.
- Contains data of an estimated camera.
- Contains data of an estimated fundamental matrix.
- Exception raised if the reconstruction process fails for some reason.
- Exception raised if initial cameras estimation fails.
- Estimates initial cameras to initialize geometry in a metric stratum.
- Listener in charge of attending events for an InitialCamerasEstimator, such as when estimation starts or finishes.
- Indicates the method used for initial estimation of cameras.
- Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views with a known initial camera baseline (camera separation), so that cameras and reconstructed points are obtained with exact scale.
- Contains configuration for a multiple view sparse re-constructor assuming that the initial baseline (separation between initial cameras) is known.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences when the baseline is known.
- Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views with a known camera baseline (camera separation), so that both cameras and reconstructed points are obtained with exact scale.
- Contains configuration for a two view sparse re-constructor assuming that the baseline (separation between cameras) is known.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views when the baseline is known.
- Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using the LMedS algorithm.
- Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views, along with the corresponding cameras on each of those views, by finding an LMSE solution to homogeneous systems of equations.
- Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views, along with the corresponding cameras on each of those views, by finding an LMSE solution to inhomogeneous systems of equations.
- Contains data relating matched 2D points and their reconstructions.
- Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using the MSAC algorithm.
- Class in charge of estimating pairs of cameras and 3D reconstructed points from sparse image point correspondences.
- Contains configuration for a paired view sparse re-constructor.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences.
- This class takes matched pairs of 2D points corresponding to a planar scene, estimates a homography relating both sets of points, decomposes the homography induced by the 3D plane of the scene, and uses that decomposition to determine the best epipolar geometry (e.g. fundamental matrix) by using the essential matrix and the intrinsic camera parameters provided for both views, reconstructing points and choosing the solution that produces the largest number of points located in front of both cameras.
- Listener to be notified of events generated by a planar best fundamental matrix estimator and re-constructor.
- Raised if triangulation of 3D points fails for some reason (e.g. degenerate geometry, numerical instabilities, etc.).
- Type of 3D point triangulator.
- Contains color information for a given point.
- Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using the PROMedS algorithm.
- Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using the PROSAC algorithm.
- Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views.
- Contains data of a reconstructed 3D point.
- Exception raised if a re-constructor fails or is cancelled.
- Abstract class for algorithms to robustly triangulate 3D points from matched 2D points and their corresponding cameras on several views.
- Listener to be notified of events such as when triangulation starts, ends, or when progress changes.
- Contains data of a 2D point sample on a given view.
- Base class to triangulate matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views.
- Handles events generated by a SinglePoint3DTriangulator.
- Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs, using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
- Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between initial cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs, using SLAM for scale estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views, using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
- Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between initial cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views, using SLAM for scale estimation.
- Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views, using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
- Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views, using SLAM for scale estimation.
- Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences.
- Contains configuration for a multiple view sparse re-constructor.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences.
- Base exception for all exceptions in the com.irurueta.ar.sfm package.
- Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views.
- Contains configuration for a two view sparse re-constructor.
- Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
- Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views, along with the corresponding cameras on each of those views, by finding a weighted solution to homogeneous systems of equations.
- Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views, along with the corresponding cameras on each of those views, by finding a weighted solution to an inhomogeneous system of equations.
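The single point triangulators listed above share one underlying idea: each view with projection matrix P and observed pixel (u, v) contributes two linear equations in the unknown 3D point X, obtained from x = P X. The sketch below illustrates the inhomogeneous linear variant of that technique, solved through normal equations; it is an illustrative, self-contained example, not this library's API, and the class and method names in it are hypothetical.

```java
// Illustrative sketch of inhomogeneous linear triangulation (hypothetical names,
// not the com.irurueta.ar.sfm API). For each view, x = P X with homogeneous
// X = (X, Y, Z, 1) gives two equations: u (p3 . X) = p1 . X and v (p3 . X) = p2 . X.
public final class TriangulationSketch {

    // cameras: one 3x4 projection matrix per view; points: matching (u, v) per view.
    public static double[] triangulate(double[][][] cameras, double[][] points) {
        int n = cameras.length;
        double[][] a = new double[2 * n][3];
        double[] b = new double[2 * n];
        for (int i = 0; i < n; i++) {
            double[][] p = cameras[i];
            double u = points[i][0];
            double v = points[i][1];
            for (int j = 0; j < 3; j++) {
                a[2 * i][j] = u * p[2][j] - p[0][j];
                a[2 * i + 1][j] = v * p[2][j] - p[1][j];
            }
            b[2 * i] = p[0][3] - u * p[2][3];
            b[2 * i + 1] = p[1][3] - v * p[2][3];
        }
        // Normal equations (A^T A) X = A^T b, stored as an augmented 3x4 matrix.
        double[][] m = new double[3][4];
        for (int r = 0; r < 3; r++) {
            for (int c = 0; c < 3; c++) {
                double s = 0.0;
                for (int k = 0; k < 2 * n; k++) {
                    s += a[k][r] * a[k][c];
                }
                m[r][c] = s;
            }
            double s = 0.0;
            for (int k = 0; k < 2 * n; k++) {
                s += a[k][r] * b[k];
            }
            m[r][3] = s;
        }
        // Gauss-Jordan elimination with partial pivoting on the 3x3 system.
        for (int col = 0; col < 3; col++) {
            int piv = col;
            for (int r = col + 1; r < 3; r++) {
                if (Math.abs(m[r][col]) > Math.abs(m[piv][col])) {
                    piv = r;
                }
            }
            double[] tmp = m[col];
            m[col] = m[piv];
            m[piv] = tmp;
            for (int r = 0; r < 3; r++) {
                if (r == col) {
                    continue;
                }
                double f = m[r][col] / m[col][col];
                for (int c = col; c < 4; c++) {
                    m[r][c] -= f * m[col][c];
                }
            }
        }
        return new double[] {m[0][3] / m[0][0], m[1][3] / m[1][1], m[2][3] / m[2][2]};
    }
}
```

Robust variants (LMedS, MSAC, PROSAC, PROMedS) wrap such a linear solver, repeatedly triangulating from subsets of views and keeping the solution whose reprojection residuals best satisfy the robust criterion.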
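The initial cameras estimators above rely on the standard relation between the fundamental matrix F, the views' intrinsic matrices K1 and K2, and the essential matrix: E = K2^T F K1. The essential matrix is then decomposed into candidate rotation/translation pairs, and triangulation selects the pair placing points in front of both cameras. A minimal sketch of just the E-from-F step (hypothetical names, not this library's API):

```java
// Hedged sketch (hypothetical names): computes E = K2^T * F * K1 with plain
// 3x3 matrix arithmetic, where F is a fundamental matrix and K1, K2 are the
// intrinsic matrices of the left and right views.
public final class EssentialSketch {

    static double[][] multiply(double[][] m, double[][] n) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                for (int k = 0; k < 3; k++) {
                    r[i][j] += m[i][k] * n[k][j];
                }
            }
        }
        return r;
    }

    static double[][] transpose(double[][] m) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                r[i][j] = m[j][i];
            }
        }
        return r;
    }

    public static double[][] essentialFromFundamental(double[][] f, double[][] k1, double[][] k2) {
        return multiply(transpose(k2), multiply(f, k1));
    }
}
```

With identity intrinsics (normalized image coordinates), E coincides with F, which is a quick sanity check for the formula.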
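The SLAM-based and known-baseline reconstructors in this package exist because multi-view geometry from images alone is only metric up to scale: once an external metric baseline is available (from fused accelerometer/gyroscope data, or because the camera separation is known), the whole reconstruction is rescaled by a single factor. An illustrative sketch of that final step, with hypothetical names that are not this library's API:

```java
// Hedged sketch (hypothetical names): fixes the scale of an up-to-scale
// reconstruction by multiplying every 3D point by
// s = metricBaseline / estimatedBaseline (camera centers would be scaled alike).
public final class ScaleSketch {

    public static double[][] rescale(double[][] points, double estimatedBaseline, double metricBaseline) {
        double s = metricBaseline / estimatedBaseline;
        double[][] out = new double[points.length][3];
        for (int i = 0; i < points.length; i++) {
            for (int j = 0; j < 3; j++) {
                out[i][j] = s * points[i][j];
            }
        }
        return out;
    }
}
```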