All Classes and Interfaces

Class
Description
Base class for estimating mean and covariance of noise in control values when the system state is held constant (only noise is provided as control input).
Base class to estimate position, velocity, acceleration and orientation of a device using sensor data such as accelerometers, gyroscopes and magnetic fields (to obtain absolute orientation).
Contains control calibration data for an absolute orientation constant velocity model SLAM estimator during Kalman filtering prediction stage.
Processes data to estimate calibration for absolute orientation with constant velocity model SLAM estimator.
Estimates position, velocity, acceleration, orientation and angular speed using data from accelerometer, gyroscope and magnetic field, assuming a constant velocity model.
Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs and using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views and using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views and using SLAM with absolute orientation and constant velocity model for scale and orientation estimation.
Contains control calibration data for an absolute orientation SLAM estimator during Kalman filtering prediction stage.
Processes data to estimate calibration for absolute orientation SLAM estimator.
Estimates position, velocity, acceleration, orientation and angular speed using data from accelerometer, gyroscope and magnetic field.
Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs and using SLAM with absolute orientation for scale and orientation estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views and using SLAM with absolute orientation for scale and orientation estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views and using SLAM (with accelerometer and gyroscope data) with absolute orientation for overall scale and orientation estimation.
Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views and using SLAM with absolute orientation for scale and orientation estimation.
Non-robust fundamental matrix estimator for Affine camera projection model.
Compares two fundamental matrices using a pure algebraic difference to determine how similar they are.
Calibrates a camera in order to find its intrinsic parameters and radial distortion using an alternating technique: first, an initial guess of the intrinsic parameters, rotation and translation is obtained to model the camera used to sample the calibration pattern; then, that result is used to find the best possible radial distortion accounting for all remaining errors.
Base exception for AR.
Base class in charge of estimating pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views and also in charge of estimating overall scene scale and absolute orientation by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
Contains control calibration data for a SLAM estimator during Kalman filtering prediction stage.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in pairs of views.
Base class containing configuration for a paired view based sparse re-constructor.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
Base class for estimating mean and covariance of noise in control values when the system state is held constant (only noise is provided as control input).
Listener for implementations of this class.
Base class to estimate position, velocity, acceleration and orientation of a device using sensor data such as accelerometers and gyroscopes.
Listener for implementations of this class.
 
Contains base configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views and also in charge of estimating overall scene scale by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
Contains base configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views and also in charge of estimating overall scene scale by means of SLAM (Simultaneous Location And Mapping) using data obtained from sensors like accelerometers or gyroscopes.
Contains base configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences for multiple views.
Base class containing configuration for a sparse re-constructor supporting multiple views.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views.
Base class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views.
Base class containing configuration for a two view sparse re-constructor.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
Base class for calibration estimators.
Base exception class for calibration package.
Calibrates a camera in order to find its intrinsic parameters and other parameters such as radial distortion.
Listener to be notified when calibration starts, finishes or any progress changes.
Contains camera calibrator methods.
Contains data obtained from a single picture using the camera.
Estimates the camera pose for a given homography and pinhole camera intrinsic parameters.
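As background (this is the standard planar pose recovery, e.g. as used in Zhang-style calibration, summarized here rather than taken from this class's documentation), a homography H mapping points of the world plane Z = 0 to the image relates to the camera pose through the intrinsic parameter matrix K as

    H \sim K [r_1 \; r_2 \; t], \quad [r_1 \; r_2 \; t] = \lambda K^{-1} H, \quad \lambda = 1 / \lVert K^{-1} h_1 \rVert, \quad r_3 = r_1 \times r_2

and the resulting rotation [r_1 r_2 r_3] is typically re-orthonormalized (e.g. via SVD) to obtain a valid rotation before refining the pose.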
Exception raised if reconstruction process is cancelled.
Contains coordinates of ideal points for a circle pattern.
Contains control calibration data for constant velocity model SLAM estimator during Kalman filtering prediction stage.
Processes data to estimate calibration for constant velocity model SLAM estimator.
Estimates position, velocity, acceleration and angular speed using data from accelerometer and gyroscope and assuming a constant velocity model.
Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs and using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs and using SLAM with constant velocity model for scale estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views and using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views and using SLAM with constant velocity model for scale estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views and using SLAM (with accelerometer and gyroscope data) with constant velocity model for overall scale estimation.
Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views and using SLAM with constant velocity model for scale estimation.
Utility class to predict device state (position, orientation, linear velocity and angular velocity) assuming a constant velocity model (acceleration is assumed zero under no external force).
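As an illustration only (a minimal sketch with hypothetical names, not this class's API), a constant velocity prediction step advances position by the current velocity over a time step while leaving velocity unchanged, since acceleration is assumed zero under no external force:

    // Minimal sketch (hypothetical names, not the library API): constant
    // velocity prediction of position over a time step dt. Velocity is left
    // unchanged because acceleration is assumed zero under no external force.
    static void predictConstantVelocity(double[] position, double[] velocity, double dt) {
        for (int i = 0; i < 3; i++) {
            position[i] += velocity[i] * dt;
        }
        // Orientation would analogously be integrated from the (constant)
        // angular rate, e.g. by composing a small rotation of angle |w| * dt.
    }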
Exception raised when point correction fails when trying to be fit into a given epipolar geometry.
Fixes matched pairs of points so that they perfectly follow a given epipolar geometry.
Interface to handle events generated by Corrector instances.
Type of corrector to fix point matches under a given epipolar geometry.
This class accounts for any possible distortion that might occur on 2D points.
Raised when an error occurs while using a Distortion.
Kind of distortion.
The dual absolute quadric (DAQ) is the degenerate dual quadric formed by the planes tangent to the absolute conic; its null space is the plane at infinity.
This class defines the interface for an estimator of the Dual Absolute Quadric (DAQ) assuming equal vertical and horizontal focal length, no skewness and principal point at the center of the image.
Thrown when DAQ estimation fails.
Listener to be notified when estimation starts, finishes or any progress changes.
Defines types of Dual Absolute Quadric estimators depending on their algorithm of implementation.
Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix and assuming zero skewness and principal point at the origin for the intrinsic parameters of estimated cameras.
This is an abstract class for algorithms to robustly find the best DualAbsoluteQuadric (DAQ) for provided collection of cameras.
Listener to be notified of events such as when estimation starts, ends or when progress changes.
The dual image of the absolute conic (DIAC), is the projection of the dual absolute quadric using a given pinhole camera.
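For reference (a standard result from multiple view geometry, not text taken from this class), the relation described above is

    \omega^{*} = P \, Q^{*}_{\infty} \, P^{\top} = K K^{\top}

where P is the pinhole camera matrix, Q^{*}_{\infty} the dual absolute quadric, K the intrinsic parameter matrix and \omega^{*} the dual image of the absolute conic.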
Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix: the Dual Image of the Absolute Conic is obtained by solving the Kruppa equations, the essential matrix is computed from it, and the best pair of camera rotations and translations is then chosen by triangulating a set of matched points and checking that the triangulated points lie in front of the cameras.
Non-robust fundamental matrix estimator that uses 8 matched 2D points on left and right views.
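As background on the technique (the classical normalized 8-point algorithm, summarized here rather than quoted from the class documentation), each match (x, y) \leftrightarrow (x', y') contributes one row of a linear system in the 9 entries of F:

    [x x', \; y x', \; x', \; x y', \; y y', \; y', \; x, \; y, \; 1] \cdot f = 0, \quad f = (F_{11}, F_{12}, \ldots, F_{33})^{\top}

With 8 or more matches the system A f = 0 is solved via SVD (taking the right singular vector of the smallest singular value), and rank 2 is then enforced on the reshaped F by zeroing its smallest singular value; coordinates are usually normalized beforehand for numerical stability.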
Compares two fundamental matrices by estimating average epipolar distances.
Base exception for all exceptions in the com.irurueta.geometry.epipolar package.
Calibrates a camera in order to find its intrinsic parameters and radial distortion by first estimating the intrinsic parameters without accounting for radial distortion, and then using an optimization algorithm to minimize error and adjust the estimated camera pose, intrinsic parameters and radial distortion parameters.
The essential matrix defines the relation between two views in a similar way that the fundamental matrix does, but taking into account the intrinsic parameters of the cameras associated to both views.
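For reference (standard epipolar geometry, not text from this class), with K and K' the intrinsic parameter matrices of the left and right views, the essential matrix relates to the fundamental matrix and to the relative pose (rotation R and translation t) as

    E = K'^{\top} F K = [t]_{\times} R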
Estimates an initial pair of cameras in the metric stratum (up to an arbitrary scale) using a given fundamental matrix and the intrinsic parameters of the left and right views (which can be obtained by offline calibration); the intrinsics are used to compute the essential matrix and to choose the combination of rotation and translation for which the 3D points triangulated from the provided matched 2D points lie in front of the estimated cameras.
Contains data of estimated camera.
Contains data of estimated fundamental matrix.
Exception raised if reconstruction process fails for some reason.
The fundamental matrix describes the epipolar geometry for a pair of cameras.
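For reference (the standard epipolar constraint, stated here as background), a fundamental matrix F constrains matched homogeneous points x and x' of the left and right views, and its epipoles e and e', by

    x'^{\top} F x = 0, \quad F e = 0, \quad F^{\top} e' = 0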
Compares two fundamental matrices to determine how similar they are.
Raised when fundamental matrices comparison fails.
Handles events produced by a FundamentalMatrixComparator.
Indicates method used to compare fundamental matrices.
Base class for a non-robust fundamental matrix estimator.
Exception raised if fundamental matrix estimation fails.
Listener to be notified of events generated by a non-robust fundamental matrix estimator.
Indicates method of non-robust fundamental matrix estimator.
Refines a fundamental matrix by taking into account an initial estimation, inlier matches and their residuals.
This is an abstract class for algorithms to robustly find the best Fundamental matrix for provided collections of matched 2D points.
Listener to be notified of events such as when estimation starts, ends or when progress changes.
Fixes matched pairs of points so that they perfectly follow a given epipolar geometry.
Fixes a single matched pair of points so that they perfectly follow a given epipolar geometry using the Gold Standard method, which is able to completely remove errors provided they are Gaussian.
Refines the epipole of a fundamental matrix formed by an initial epipole estimation and an estimated homography.
Decomposes a 2D homography to extract its internal geometry structure.
Exception raised if homography decomposition fails.
Listener to be notified of events generated by an homography decomposer.
Contains decomposition data of an homography.
The image of the absolute conic, is the projection of the absolute quadric using a given pinhole camera.
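For reference (a standard result, not quoted from this class), the image of the absolute conic depends only on the intrinsic parameter matrix K,

    \omega = (K K^{\top})^{-1} = K^{-\top} K^{-1}

which is why estimating the IAC (e.g. from homographies of a planar pattern) yields the camera intrinsic parameters.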
This class defines the interface for an estimator of the Image of Absolute Conic (IAC).
Thrown when IAC estimation fails.
Listener to be notified when estimation starts, finishes or any progress changes.
Image of Absolute Conic (IAC) estimator types.
This is an abstract class for algorithms to robustly find the best ImageOfAbsoluteConic (IAC) for provided collection of 2D homographies.
Listener to be notified of events such as when estimation starts, ends or when progress changes.
Refines the epipole of a fundamental matrix formed by an initial epipole estimation and an estimated homography.
Exception raised if initial cameras estimation fails.
Estimates initial cameras to initialize geometry in a metric stratum.
Listener in charge of attending events for an InitialCamerasEstimator, such as when estimation starts or finishes.
Indicates method used for initial estimation of cameras.
Raised when an essential matrix is not well-defined.
Raised if a given matrix is not a valid fundamental matrix (i.e. 3x3 matrix having rank 2).
Raised if a given pair of cameras cannot span a valid epipolar geometry, typically because they are set in a degenerate configuration.
Raised when providing an invalid pair of intrinsic parameters to define an essential matrix.
Exception produced when plane at infinity cannot be determined.
Raised if given rotation or translation are not valid to define an essential matrix.
Exception produced when provided transformation is numerically unstable.
Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in multiple views and a known initial camera baseline (camera separation), so that cameras and reconstructed points are obtained with exact scale.
Contains configuration for a multiple view sparse re-constructor assuming that the initial baseline (separation between initial cameras) is known.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences when baseline is known.
Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views and known camera baseline (camera separation), so that both cameras and reconstructed points are obtained with exact scale.
Contains configuration for a two view sparse re-constructor assuming that the baseline (separation between cameras) is known.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views when baseline is known.
Estimates the DIAC (Dual Image of Absolute Conic) by solving Kruppa's equations and assuming known principal point and zero skewness.
Thrown when DIAC estimation fails.
Listener to be notified when estimation starts, finishes or any progress changes.
Finds the best Dual Absolute Quadric (DAQ) for provided collection of cameras using LMedS algorithm.
Finds the best fundamental matrix for provided collections of matched 2D points using LMedS algorithm.
Finds the best Image of Absolute Conic (IAC) for provided collection of homographies (2D transformations) using LMedS algorithm.
Finds the best radial distortion for provided collection of 2D points using LMedS algorithm.
Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using LMedS algorithm.
Implementation of a Dual Absolute Quadric estimator using an LMSE (Least Mean Squared Error) solution for provided pinhole cameras.
Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views by finding an LMSE solution to homogeneous systems of equations.
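As background on the homogeneous DLT formulation used by such triangulators (a summary of the standard method with notation chosen here), a view i with camera matrix P_i (rows p_i^{1\top}, p_i^{2\top}, p_i^{3\top}) observing image point (u_i, v_i) contributes two equations to a homogeneous system in the 3D point X:

    (u_i \, p_i^{3\top} - p_i^{1\top}) X = 0, \quad (v_i \, p_i^{3\top} - p_i^{2\top}) X = 0

Stacking the equations of all views gives A X = 0, which is solved in a least squares sense via the SVD of A.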
This class defines an LMSE (Least Mean Squared Error) estimator of the Image of Absolute Conic (IAC).
Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views by finding an LMSE solution to an inhomogeneous system of equations.
This class defines an LMSE (Least Mean Squared Error) estimator of radial distortion.
Contains data relating matched 2D points and their reconstructions.
Finds the best dual absolute quadric (DAQ) for provided collection of cameras using MSAC algorithm.
Finds the best fundamental matrix for provided collections of matched 2D points using MSAC algorithm.
Finds the best image of absolute conic (IAC) for provided collection of homographies (2D transformations) using MSAC algorithm.
Finds the best radial distortion for provided collections of 2D points using MSAC algorithm.
Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using MSAC algorithm.
Class in charge of estimating pairs of cameras and 3D reconstructed points from sparse image point correspondences.
Contains configuration for a paired view sparse re-constructor.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences.
Abstract representation of the 2D samples contained in a pattern.
Enumerator indicating pattern type.
This class takes matched pairs of 2D points corresponding to a planar scene, estimates an homography relating both sets of points, and decomposes the homography induced by the 3D plane of the scene. That decomposition, together with the provided intrinsic camera parameters of both views, is used to build candidate epipolar geometries (e.g. fundamental matrices) through the essential matrix; points are reconstructed for each candidate and the solution producing the largest number of points located in front of both cameras is chosen.
Listener to be notified of events generated by a planar best fundamental matrix estimator and re-constructor.
This class takes an input 2D homography (i.e. a projective 2D transformation) and a given pair of intrinsic parameters for the left and right views, and estimates all possible fundamental matrices generating such homography in a planar scene.
Listener to be notified of events generated by a planar fundamental matrix estimator.
Raised if triangulation of 3D points fails for some reason (e.g. degenerate geometry, numerical instabilities, etc.).
Type of 3D point triangulator.
Contains color information for a given point.
Utility class to predict the position of a device.
Finds the best dual absolute quadric (DAQ) for provided collection of cameras using PROMedS algorithm.
Finds the best fundamental matrix for provided collections of matched 2D points using PROMedS algorithm.
Finds the best image of absolute conic (IAC) for provided collection of homographies (2D transformations) using PROMedS algorithm.
Finds the best radial distortion for provided collections of 2D points using PROMedS algorithm.
Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using PROMedS algorithm.
Finds the best dual absolute quadric (DAQ) for provided collection of cameras using PROSAC algorithm.
Finds the best fundamental matrix for provided collections of matched 2D points using PROSAC algorithm.
Finds the best image of absolute conic (IAC) for provided collection of homographies (2D transformations) using PROSAC algorithm.
Finds the best radial distortion for provided collections of 2D points using PROSAC algorithm.
Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using PROSAC algorithm.
Contains coordinates of ideal points for a QR code pattern version 2.
Utility class to predict rotations.
Class implementing Brown's radial distortion.
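As an illustration of the model (a minimal sketch with hypothetical names and only the first two radial terms, not this class's API):

    // Minimal sketch of Brown's radial distortion (hypothetical names, not the
    // library API). (xu, yu) are undistorted, normalized coordinates relative
    // to the distortion center; k1 and k2 are radial distortion coefficients.
    static double[] distort(double xu, double yu, double k1, double k2) {
        double r2 = xu * xu + yu * yu;                // squared radial distance
        double factor = 1.0 + k1 * r2 + k2 * r2 * r2; // radial scaling
        return new double[] { xu * factor, yu * factor };
    }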
This class defines the interface for an estimator of radial distortion.
Thrown when radial distortion estimation fails.
Listener to be notified when estimation starts, finishes or any progress changes.
Defines types of radial distortion estimators depending on their implementation.
Raised when an error occurs while using a RadialDistortion.
This is an abstract class for algorithms to robustly find the best RadialDistortion for provided collections of matched distorted and undistorted 2D points.
Listener to be notified of events such as when estimation starts, ends or when progress changes.
Finds the best dual absolute quadric (DAQ) using RANSAC algorithm.
Finds the best fundamental matrix for provided collections of matched 2D points using RANSAC algorithm.
Finds the best Image of Absolute Conic (IAC) using RANSAC algorithm.
Finds the best radial distortion for provided collections of 2D points using RANSAC algorithm.
Robustly triangulates 3D points from matched 2D points and their corresponding cameras on several views using RANSAC algorithm.
Contains data of a reconstructed 3D point.
Exception raised if a re-constructor fails or is cancelled.
Base class to refine the epipole of a fundamental matrix formed by an initial epipole estimation and an estimated homography.
Abstract class for algorithms to robustly triangulate 3D points from matched 2D points and their corresponding cameras on several views.
Listener to be notified of events such as when triangulation starts, ends or when progress changes.
Contains data of a 2D point sample on a given view.
Fixes matched pairs of points so that they perfectly follow a given epipolar geometry.
Fixes a single matched pair of points so that they perfectly follow a given epipolar geometry using the Sampson approximation.
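For reference (the standard Sampson correction from multiple view geometry, stated here rather than taken from the class documentation), given matched homogeneous points x, x' with algebraic error \epsilon = x'^{\top} F x, the first-order correction applied to the coordinates (x, y, x', y') is

    \delta = - \frac{\epsilon}{(F^{\top} x')_1^2 + (F^{\top} x')_2^2 + (F x)_1^2 + (F x)_2^2} \, \big( (F^{\top} x')_1, \; (F^{\top} x')_2, \; (F x)_1, \; (F x)_2 \big)^{\top}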
Non-robust fundamental matrix estimator that uses 7 matched 2D points on left and right views.
Fixes a single matched pair of points so that they perfectly follow a given epipolar geometry.
This class estimates the intrinsic and extrinsic (rotation and camera center) parameters of a camera using a provided homography.
Thrown when camera estimation fails.
Listener to be notified when estimation starts or finishes.
Base class to triangulate matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views.
Handles events generated by a SinglePoint3DTriangulator.
Contains control calibration data for a SLAM estimator during Kalman filtering prediction stage.
Processes data to estimate calibration for SLAM estimator.
Estimates position, velocity, acceleration, orientation and angular speed using data from accelerometer and gyroscope.
Base exception for all exceptions related to SLAM.
Estimates pairs of cameras and 3D reconstructed points from sparse image point correspondences in multiple view pairs and using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
Contains configuration for a paired view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between initial cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple view pairs and using SLAM for scale estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in multiple views and using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
Contains configuration for a multiple view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between initial cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in multiple views and using SLAM for scale estimation.
Estimates cameras and 3D reconstructed points from sparse image point correspondences in two views and using SLAM (with accelerometer and gyroscope data) for overall scale estimation.
Contains configuration for a two view sparse re-constructor using SLAM (Simultaneous Location And Mapping) to determine the scale of the scene (i.e. the baseline or separation between cameras) by fusing both camera data and data from sensors like an accelerometer or gyroscope.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views and using SLAM for scale estimation.
Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences.
Contains configuration for a multiple view sparse re-constructor.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences.
Utility class to predict device state (position, orientation, linear velocity, linear acceleration and angular velocity).
Base exception for all exceptions in the com.irurueta.ar.sfm package.
Class in charge of estimating cameras and 3D reconstructed points from sparse image point correspondences in two views.
Contains configuration for a two view sparse re-constructor.
Listener to retrieve and store required data to compute a 3D reconstruction from sparse image point correspondences in two views.
Utility class to predict the velocity of a device.
Implementation of a Dual Absolute Quadric estimator using a weighted solution for provided pinhole cameras.
Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views by finding a weighted solution to homogeneous systems of equations.
This class implements an Image of Absolute Conic (IAC) estimator using a weighted algorithm and correspondences.
Triangulates matched 2D points into a single 3D one by using 2D point correspondences on different views along with the corresponding cameras on each of those views by finding a weighted solution to an inhomogeneous system of equations.
This class implements a radial distortion estimator using a weighted algorithm and correspondences.