Results 1 - 20 of 319
1.
Orthod Craniofac Res ; 27(5): 803-812, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38746976

ABSTRACT

OBJECTIVES: In addition to studying facial anatomy, stereophotogrammetry is an efficient diagnostic tool for assessing facial expressions through 3D video recordings. Current technology produces high-quality recordings but also generates an excessive amount of data. Here, we compare various recording speeds for three standardized movements using the 3dMDface camera system to assess its accuracy and reliability. MATERIALS AND METHODS: A linear and two circular movements were performed using a 3D-printed cube mounted on a robotic arm. All movements were recorded initially at 60 fps (frames/second) and then at 30 and 15 fps. Recording accuracy was tested with best-fit superimpositions of consecutive frames of the 3D cube and calculation of the Mean Absolute Distance (MAD). The reliability of the recordings was tested by evaluating the inter- and intra-examiner error. RESULTS: The accuracy of movement recordings was excellent at all speeds (60, 30 and 15 fps), with variability in MAD values consistently less than 1 mm. The reliability of the camera recordings was excellent at all recording speeds. CONCLUSIONS: This study demonstrated that 3D recordings of facial expressions can be performed at 30 or even 15 fps without significant loss of information. This considerably reduces the amount of data produced, facilitating further processing and analysis.


Subject(s)
Facial Expression , Imaging, Three-Dimensional , Photogrammetry , Video Recording , Humans , Reproducibility of Results , Imaging, Three-Dimensional/methods , Photogrammetry/instrumentation
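The accuracy metric described in the abstract above, a best-fit superimposition of consecutive frames followed by a Mean Absolute Distance calculation, can be sketched in a few lines of NumPy. The snippet below is an illustrative reconstruction rather than the authors' code: it assumes two frames are already available as corresponding point arrays, rigidly aligns the second to the first with the Kabsch algorithm, and reports the MAD of the residuals. All input data are made up.

```python
import numpy as np

def best_fit_mad(frame_a, frame_b):
    """Rigidly align frame_b to frame_a (Kabsch) and return the
    Mean Absolute Distance between corresponding points in mm."""
    a = frame_a - frame_a.mean(axis=0)           # centre both point sets
    b = frame_b - frame_b.mean(axis=0)
    u, _, vt = np.linalg.svd(b.T @ a)            # optimal rotation via SVD
    d = np.sign(np.linalg.det(u @ vt))           # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    b_aligned = b @ r
    return np.mean(np.linalg.norm(a - b_aligned, axis=1))

# Example: two (N, 3) arrays of cube vertices from consecutive frames (hypothetical data).
rng = np.random.default_rng(0)
frame1 = rng.uniform(0, 40, size=(8, 3))
frame2 = frame1 + rng.normal(0, 0.05, size=frame1.shape)  # small sensor noise
print(f"MAD = {best_fit_mad(frame1, frame2):.3f} mm")
```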
2.
Dermatol Surg ; 46(9): e23-e31, 2020 09.
Article in English | MEDLINE | ID: mdl-31809350

ABSTRACT

BACKGROUND: Three-dimensional (3D) imaging has become increasingly popular in aesthetic surgery. However, few studies have emphasized its application in the periocular region. OBJECTIVE: To provide evidence supporting the reliability of generalizing periocular measurements obtained using caliper-derived direct anthropometry and 2-dimensional (2D) photogrammetry to 3D stereophotogrammetry. MATERIALS AND METHODS: Periocular surfaces were captured using a stereophotogrammetry system for 46 normal Caucasian individuals. Twenty-two periocular variables were directly, 2-dimensionally, and 3-dimensionally measured. Reliability of these measurements was evaluated and compared with each other. RESULTS: The results revealed that, for direct (intra-rater reliability only), 2D, and 3D anthropometry, overall intra-rater and inter-rater intraclass correlation coefficient estimates were 0.88, 0.99 and 0.97, and 0.98 and 0.92, respectively; mean absolute differences were 0.84 mm, 0.26 and 0.36 units, and 0.35 and 0.67 units, respectively; technical error of measurement (TEM) estimates were 0.85 mm, 0.25 and 0.36 units, and 0.32 and 0.65 units, respectively; relative error measurement estimates were 6.46%, 1.69% and 2.74%, and 1.67% and 5.11%, respectively; and relative TEM estimates were 6.25%, 1.62% and 2.78%, and 2.12% and 5.12%, respectively. CONCLUSION: Stereophotogrammetry and the authors' landmark location protocol yield very good reliability for a series of 2D and 3D measurements.


Subject(s)
Anthropometry/methods , Face/diagnostic imaging , Imaging, Three-Dimensional/methods , Photogrammetry/methods , Adult , Anatomic Landmarks , Anthropometry/instrumentation , Face/anatomy & histology , Female , Humans , Imaging, Three-Dimensional/instrumentation , Male , Middle Aged , Photogrammetry/instrumentation , Reproducibility of Results , Young Adult
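The agreement statistics reported in the abstract above (mean absolute difference, technical error of measurement, and their relative counterparts) follow standard anthropometric definitions. The sketch below shows one common way to compute them for two repeated measurement series; it is a generic illustration with invented numbers, not the authors' protocol.

```python
import numpy as np

def agreement_stats(trial1, trial2):
    """Mean absolute difference (MAD), technical error of measurement (TEM),
    and relative TEM (%) for two repeated measurement series."""
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diff = t1 - t2
    mad = np.mean(np.abs(diff))
    tem = np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))    # classic two-trial TEM
    rtem = 100 * tem / np.mean(np.concatenate([t1, t2]))  # TEM relative to the grand mean
    return mad, tem, rtem

# Hypothetical repeated intercanthal-distance measurements in mm.
first  = [31.2, 30.8, 32.5, 29.9, 31.7]
second = [31.0, 31.1, 32.2, 30.3, 31.5]
mad, tem, rtem = agreement_stats(first, second)
print(f"MAD = {mad:.2f} mm, TEM = {tem:.2f} mm, rTEM = {rtem:.2f}%")
```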
3.
Am J Med Genet A ; 179(8): 1459-1465, 2019 08.
Article in English | MEDLINE | ID: mdl-31134750

ABSTRACT

BACKGROUND: Growth retardation is one of the main hallmarks of CHARGE syndrome (CS), yet little is known about the body proportions of these children. Knowledge of body proportions in CS may contribute to a better characterization of this syndrome. This knowledge is important when considering starting growth-stimulating therapy. METHODS: For this cross-sectional study, we selected 32 children with CS and a CHD7 mutation at the Dutch CHARGE Family Day in 2016 or 2017 and the International CHARGE conference in Orlando, Florida, in 2017. We used photogrammetric anthropometry-a measurement method based on digital photographs-to determine various body proportions. We compared these to measurements in 21 normally proportioned children with growth hormone deficiency, using independent-samples t test, Mann-Whitney U test, or chi-square test as appropriate. RESULTS: Children with CS appear to have a shorter trunk in proportion to their height, head length, and arm length. Children with CS also had smaller feet proportional to tibia length compared to controls. The change of body proportions with age was similar in children with CS and controls. CONCLUSION: Body proportions in children with CS are significantly different from those of normally proportioned controls, but a similar change of body proportions with age was noted for both groups.


Subject(s)
Anthropometry/methods , CHARGE Syndrome/diagnosis , Photogrammetry/methods , Adolescent , Anthropometry/instrumentation , Body Height , CHARGE Syndrome/genetics , CHARGE Syndrome/pathology , Child , Child, Preschool , Cross-Sectional Studies , Female , Head/abnormalities , Humans , Male , Photogrammetry/instrumentation , Torso/abnormalities
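The group comparisons in the abstract above rest on standard two-sample tests applied to body-proportion ratios. A minimal SciPy sketch, with invented ratios and an assumed normality check, illustrates the kind of test selection the authors mention (t test for roughly normal data, Mann-Whitney U otherwise); it is not their analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical sitting-height-to-stature ratios for two groups.
charge_ratios  = np.array([0.49, 0.50, 0.48, 0.51, 0.47, 0.49, 0.50])
control_ratios = np.array([0.53, 0.52, 0.54, 0.52, 0.53, 0.51, 0.54])

# Use an independent-samples t test when both samples look normal, Mann-Whitney U otherwise.
normal = all(stats.shapiro(g)[1] > 0.05 for g in (charge_ratios, control_ratios))
if normal:
    stat, p = stats.ttest_ind(charge_ratios, control_ratios, equal_var=False)
else:
    stat, p = stats.mannwhitneyu(charge_ratios, control_ratios)
print(f"normal={normal}, statistic={stat:.3f}, p={p:.4f}")
```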
4.
J Dairy Res ; 86(1): 34-39, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30773145

ABSTRACT

We address the hypothesis that individual cow feed intake can be measured on commercial farms using a photogrammetry method. Feed intake and feed efficiency have significant economic value for the farmer. A common method for measuring feed mass in research is a feed-weighing system, which is prohibitively expensive for commercial farms. However, feed mass can be estimated from its volume, which can be measured by photogrammetry. Photogrammetry uses cameras placed along the feed lane, photographing the feed before and after the cow visits the feed lane and calculating the feed volume. In this study, the precision of estimating feed mass from its volume was tested by comparing the measured mass and calculated volume of feed heaps. The following principal factors affected the precision of this method: camera quality, lighting conditions, image resolution, number of images, and feed density. Under laboratory conditions, the feed mass estimation error was 0.483 kg for heaps of up to 7 kg, while in the cowshed the estimation error was 1.32 kg for heaps of up to 40 kg. A complementary experiment showed that natural feed compressibility accounts for about 85% of the uncertainty in the mass estimation error.


Subject(s)
Animal Feed , Cattle/physiology , Dairying/methods , Photogrammetry/veterinary , Animal Feed/analysis , Animal Feed/economics , Animal Feed/statistics & numerical data , Animals , Eating , Female , Monitoring, Physiologic/methods , Photogrammetry/instrumentation , Sensitivity and Specificity
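The core estimation step in the abstract above, mass from photogrammetric volume, amounts to fitting a density-like calibration and reporting the residual error. The snippet below is a simplified stand-in for that step, using invented heap volumes and weighed masses rather than the study's data.

```python
import numpy as np

# Hypothetical calibration data: photogrammetric heap volumes (m^3) and weighed masses (kg).
volumes = np.array([0.010, 0.018, 0.025, 0.033, 0.041, 0.052])
masses  = np.array([ 2.1,   3.8,   5.2,   6.9,   8.4,  10.9 ])

# Least-squares fit of mass = density * volume (line through the origin).
density = np.sum(volumes * masses) / np.sum(volumes ** 2)
predicted = density * volumes
mae  = np.mean(np.abs(masses - predicted))
rmse = np.sqrt(np.mean((masses - predicted) ** 2))

print(f"effective density = {density:.1f} kg/m^3")
print(f"mass estimation error: MAE = {mae:.3f} kg, RMSE = {rmse:.3f} kg")
```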
5.
Sensors (Basel) ; 19(18)2019 Sep 12.
Article in English | MEDLINE | ID: mdl-31547455

ABSTRACT

Three-dimensional (3D) models are widely used in clinical applications, geosciences, cultural heritage preservation, and engineering; this, together with emerging needs such as building information modeling (BIM), is driving the development of low-cost data capture techniques and devices with a reduced learning curve that non-specialized users can employ. This paper presents a simple, self-assembled device for 3D point cloud data capture with an estimated base price under €2500; furthermore, a workflow for the calculations is described that includes a threaded Visual SLAM-photogrammetric algorithm implemented in C++. Another purpose of this work is to validate the proposed system in BIM working environments. To achieve this, several 3D point clouds were obtained in outdoor tests and the coordinates of 40 points were measured with the device, with data capture distances ranging between 5 and 20 m. These were then compared with the coordinates of the same targets measured by a total station. The average Euclidean distance errors and root mean square errors (RMSEs) ranged between 12-46 mm and 8-33 mm respectively, depending on the data capture distance (5-20 m). Furthermore, the proposed system was compared with a commonly used photogrammetric methodology based on Agisoft Metashape software. The results demonstrate that the proposed system satisfies (in each case) the tolerances of 'level 1' (51 mm) and 'level 2' (13 mm) for point cloud acquisition in urban design and historic documentation, according to the BIM Guide for 3D Imaging (U.S. General Services Administration).


Subject(s)
Algorithms , Archaeology/methods , Imaging, Three-Dimensional/methods , Photogrammetry/methods , Cloud Computing , Equipment Design , Imaging, Three-Dimensional/instrumentation , Photogrammetry/instrumentation , Software , Spain , Workflow
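The validation against total-station coordinates described above boils down to per-target Euclidean distance errors and an RMSE per capture distance. A short sketch of that comparison, with placeholder coordinate arrays, is shown below; the SLAM-photogrammetric pipeline itself (implemented in C++ by the authors) is not reproduced.

```python
import numpy as np

def coordinate_errors(point_cloud_xyz, total_station_xyz):
    """Euclidean distance error per target and overall RMSE (same units as input)."""
    diffs = np.asarray(point_cloud_xyz, float) - np.asarray(total_station_xyz, float)
    dists = np.linalg.norm(diffs, axis=1)
    rmse = np.sqrt(np.mean(dists ** 2))
    return dists, rmse

# Hypothetical coordinates (metres) of a few checked targets.
measured  = np.array([[2.01, 5.03, 1.02], [7.98, 3.02, 0.99], [4.02, 9.01, 1.51]])
reference = np.array([[2.00, 5.00, 1.00], [8.00, 3.00, 1.00], [4.00, 9.00, 1.50]])
dists, rmse = coordinate_errors(measured, reference)
print("per-target errors (mm):", np.round(dists * 1000, 1))
print(f"RMSE = {rmse * 1000:.1f} mm")
```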
6.
Wound Repair Regen ; 26(6): 456-462, 2018 11.
Article in English | MEDLINE | ID: mdl-30118155

ABSTRACT

To monitor wound healing, it is essential to obtain accurate and reliable wound measurements. Various methods have been used to measure wound size, including three-dimensional (3D) measurement devices that enable wound assessment from a volume perspective. However, the currently available methods are inaccurate, costly, or complicated to use. As a consequence, we have developed a 3D wound assessment monitor (WAM) camera, which is able to measure wound size in three dimensions and to assess wound characteristics. The aim of the study was to assess the intrarater and interrater reliability of the 3D wound measurements obtained with the 3D camera and to compare these with traditional measurement methods. Four raters measured 48 wounds using the 3D camera, a digital imaging method (2D area), and gel injection into the wound cavity (volume). The data were analyzed using a linear mixed-effects model. Intraclass and interclass correlation coefficients (ICCs) and Bland-Altman plots were used to assess intrarater and interrater reliability for the 3D camera and agreement between the methods. The Bland-Altman plots for intrarater reliability showed minor differences between the measurements, especially for the 3D area and perimeter measurements. Moreover, ICCs were very high for both intrarater and interrater reliability for the 2D area, 3D area, and perimeter measurements (ICCs > 0.99), although slightly lower for the volume measurements (ICC = 0.946-0.950). Finally, high agreement was found between the 3D camera and the traditional methods (2D area and volume), as assessed by narrow 95% prediction intervals and high ICCs above 0.97. In conclusion, the 3D-WAM camera is an accurate and reliable method that is useful for several types of wounds. However, the volume measurements were primarily useful in large, deep wounds. Moreover, the 3D images are digital and can therefore be used in remote settings.


Subject(s)
Imaging, Three-Dimensional/instrumentation , Imaging, Three-Dimensional/standards , Photogrammetry/instrumentation , Photogrammetry/standards , Wound Healing/physiology , Wounds and Injuries/diagnostic imaging , Wounds and Injuries/pathology , Adult , Female , Humans , Male , Middle Aged , Observer Variation , Reproducibility of Results , Skin Physiological Phenomena
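Agreement in the study above is assessed with ICCs and Bland-Altman plots. The following sketch computes the Bland-Altman bias and 95% limits of agreement for two wound-area measurement methods; the data are invented and the full mixed-model/ICC analysis is not reproduced.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)     # half-width of the limits of agreement
    return bias, bias - half_width, bias + half_width

# Hypothetical wound areas (cm^2): 3D camera vs. digital planimetry.
camera_3d  = [4.2, 7.9, 12.3, 3.1, 9.8, 15.4]
planimetry = [4.0, 8.1, 12.0, 3.3, 9.5, 15.9]
bias, lower, upper = bland_altman(camera_3d, planimetry)
print(f"bias = {bias:.2f} cm^2, 95% LoA = [{lower:.2f}, {upper:.2f}] cm^2")
```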
7.
J Oral Maxillofac Surg ; 76(8): 1772-1784, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29458028

ABSTRACT

PURPOSE: Modern 3-dimensional (3D) image acquisition systems represent a crucial technologic development in facial anatomy because of their accuracy and precision. The recently introduced portable devices can improve facial databases by increasing the number of applications. In the present study, the VECTRA H1 portable stereophotogrammetric device was validated to verify its applicability to 3D facial analysis. MATERIALS AND METHODS: Fifty volunteers underwent 4 facial scans using portable VECTRA H1 and static VECTRA M3 devices (2 for each instrument). Repeatability of linear, angular, surface area, and volume measurements was verified within the device and between devices using the Bland-Altman test and the calculation of absolute and relative technical errors of measurement (TEM and rTEM, respectively). In addition, the 2 scans obtained by the same device and the 2 scans obtained by different devices were registered and superimposed to calculate the root mean square (RMS; point-to-point) distance between the 2 surfaces. RESULTS: Most linear, angular, and surface area measurements had high repeatability in M3 versus M3, H1 versus H1, and M3 versus H1 comparisons (range, 82.2 to 98.7%; TEM range, 0.3 to 2.0 mm, 0.4° to 1.8°; rTEM range, 0.2 to 3.1%). In contrast, volumes and RMS distances showed evident differences in M3 versus M3 and H1 versus H1 comparisons and reached the maximum when scans from the 2 different devices were compared. CONCLUSION: The portable VECTRA H1 device proved reliable for assessing linear measurements, angles, and surface areas; conversely, the influence of involuntary facial movements on volumes and RMS distances was more important compared with the static device.


Subject(s)
Face/diagnostic imaging , Photogrammetry/instrumentation , Point-of-Care Systems , Adult , Female , Healthy Volunteers , Humans , Imaging, Three-Dimensional/instrumentation , Male , Middle Aged , Reproducibility of Results
8.
J Craniofac Surg ; 29(5): 1261-1265, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29521745

ABSTRACT

The usefulness of three-dimensional (3D) stereophotogrammetry for treating cleft lip (CL) has been well documented. However, there are only a few reliable anthropometric analyses in infants with CL because at this age they cannot assume a resting facial position. Since 2014, we have used a handheld 3D imaging system in the operating room to obtain optimal images of infants with CL and palate under general anesthesia. Currently, 168 infants with a unilateral cleft, 50 infants with bilateral clefts, and 47 infants with an isolated cleft palate are being followed up in this way for a maximum of 30 months. Most patients ≥3 years of age are cooperative and allow staff to obtain 3D images without sedation. We plan to follow them until adulthood, obtaining 3D images at every intervention. Each year, >150 infants can be added to this ongoing longitudinal study. Using an archive of these digital images, various retrospective studies can be attempted in the future, which include comparisons of the long-term outcomes of various surgical techniques and interventions at different time intervals. This is the first 2-year preliminary report of a 20-year longitudinal study.


Subject(s)
Cleft Lip/diagnostic imaging , Cleft Palate/diagnostic imaging , Imaging, Three-Dimensional/methods , Photogrammetry/instrumentation , Anthropometry , Child, Preschool , Cleft Lip/surgery , Cleft Palate/surgery , Follow-Up Studies , Humans , Imaging, Three-Dimensional/instrumentation , Infant , Longitudinal Studies , Retrospective Studies
9.
J Prosthet Dent ; 120(2): 232-241, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29559220

ABSTRACT

STATEMENT OF PROBLEM: Conventional impression techniques to obtain a definitive cast for a complete-arch implant-supported prosthesis are technique-sensitive and time-consuming. Direct optical recording with a camera could offer an alternative to conventional impression making. PURPOSE: The purpose of this in vitro study was to test a novel intraoral image capture protocol to obtain 3-dimensional (3D) implant spatial measurement data under simulated oral conditions of vertical opening and lip retraction. MATERIAL AND METHODS: A mannequin was assembled simulating the intraoral conditions of a patient having an edentulous mandible with 5 interforaminal implants. Simulated mouth openings with 2 interincisal openings (35 mm and 55 mm) and 3 lip retractions (55 mm, 75 mm, and 85 mm) were evaluated to record the implant positions. The 3D spatial orientations of implant replicas embedded in the reference model were measured using a coordinate measuring machine (CMM) (control). Five definitive casts were made with a splinted conventional impression technique of the reference model. The positions of the implant replicas for each of the 5 casts were measured with a Nobel Procera Scanner (conventional digital method). For the prototype, optical targets were secured to the implant replicas, and 3 sets of 12 images each were recorded for the photogrammetric process of 6 groups of retractions and openings using a digital camera and a standardized image capture protocol. Dimensional data were imported into photogrammetry software (photogrammetry method). The calculated and/or measured precision and accuracy of the implant positions in 3D space for the 6 groups were compared with 1-way ANOVA with an F-test (α=.05). RESULTS: The precision (standard error [SE] of measurement) for CMM was 3.9 µm (95% confidence interval [CI] 2.7 to 7.1 µm). For the conventional impression method, the SE of measurement was 17.2 µm (95% CI 10.3 to 49.4 µm). For photogrammetry, a grand mean was calculated for groups MinR-AvgO, MinR-MaxO, AvgR-AvgO, and MaxR-AvgO obtaining a value of 26.8 µm (95% CI 18.1 to 51.4 µm). The overall linear measurement error for accurately locating the top center points (TCP) followed a similar pattern as for precision. CMM (coordinate measurement machine) measurement represents the nonclinical gold standard, with an average error TCP distance of 4.6 µm (95% CI 3.5 to 6 µm). All photogrammetry groups presented an accuracy that ranged from 63 µm (SD 17.6) to 47 µm (SD 9.2). The grand mean of accuracy was calculated as 55.2 µm (95% CI 8.8 to 130.8 µm). CONCLUSIONS: The CMM group (control) demonstrated the highest levels of accuracy and precision. Most of the groups with the photogrammetric method were statistically similar to the conventional group except for groups AvgR-MaxO and MaxR-MaxO, which represented maximum opening with average retraction and maximum opening with maximum retraction.


Subject(s)
Computer Simulation , Dental Impression Technique , Dental Prosthesis, Implant-Supported , Image Processing, Computer-Assisted/methods , Photogrammetry/methods , Dental Arch , Dental Casting Technique , Dental Implants , Dental Prosthesis Design , Denture Design/methods , Humans , Imaging, Three-Dimensional/methods , Jaw, Edentulous , Mandible/diagnostic imaging , Models, Dental , Photogrammetry/instrumentation
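The group comparison used in the study above, a one-way ANOVA with an F test at α = .05, can be reproduced in a few lines with SciPy once per-group error values are available. The snippet below uses invented micrometre-scale errors for three capture conditions purely to show the call; it is not the authors' dataset.

```python
from scipy import stats

# Hypothetical linear measurement errors (micrometres) for three capture conditions.
cmm_errors     = [4.1, 5.0, 4.6, 3.9, 4.8]
conventional   = [16.2, 18.9, 17.0, 15.4, 19.8]
photogrammetry = [25.1, 28.7, 24.3, 30.2, 26.9]

f_stat, p_value = stats.f_oneway(cmm_errors, conventional, photogrammetry)
alpha = 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, "
      f"{'reject' if p_value < alpha else 'fail to reject'} H0 at alpha = {alpha}")
```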
10.
J Microsc ; 267(3): 356-370, 2017 09.
Article in English | MEDLINE | ID: mdl-28474765

ABSTRACT

In the last few years, the study of cut marks on bone surfaces has become fundamental for the interpretation of prehistoric butchery practices. Due to the difficulties in the correct identification of cut marks, many criteria for their description and classification have been suggested. Different techniques, such as three-dimensional digital microscope (3D DM), laser scanning confocal microscopy (LSCM) and micro-photogrammetry (M-PG) have been recently applied to the study of cut marks. Although the 3D DM and LSCM microscopic techniques are the most commonly used for the 3D identification of cut marks, M-PG has also proved to be very efficient and a low-cost method. M-PG is a noninvasive technique that allows the study of the cortical surface without any previous preparation of the samples, and that generates high-resolution models. Despite the current application of microscopic and micro-photogrammetric techniques to taphonomy, their reliability has never been tested. In this paper, we compare 3D DM, LSCM and M-PG in order to assess their resolution and results. In this study, we analyse 26 experimental cut marks generated with a metal knife. The quantitative and qualitative information registered is analysed by means of standard multivariate statistics and geometric morphometrics to assess the similarities and differences obtained with the different methodologies.


Subject(s)
Imaging, Three-Dimensional , Microscopy, Confocal , Models, Statistical , Photogrammetry , Analysis of Variance , Image Processing, Computer-Assisted , Microscopy, Confocal/instrumentation , Microscopy, Confocal/methods , Microscopy, Confocal/standards , Photogrammetry/instrumentation , Photogrammetry/methods , Photogrammetry/standards , Reproducibility of Results
11.
Orthod Craniofac Res ; 20 Suppl 1: 119-124, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28643910

ABSTRACT

OBJECTIVES: To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. SETTING AND SAMPLE POPULATION: UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. MATERIAL AND METHODS: Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images using the 3dMDface system were made at 0, 1 and 24 hours and at 1, 2, 3 and 4 weeks. Error magnitude statistics used included the mean absolute difference, standard deviation of error, relative error magnitude and root mean square error. Intra-observer agreement for all measurements was attained. RESULTS: Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared with shorter distances. No systematic errors were observed in any of the paired t tests (P<.05). Agreement values between the two observers ranged from 0.91 to 0.99. CONCLUSIONS: Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that the 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning and treatment evaluation.


Subject(s)
Anthropometry/instrumentation , Head/anatomy & histology , Imaging, Three-Dimensional/instrumentation , Orthodontics , Photogrammetry/instrumentation , Humans , Image Processing, Computer-Assisted/methods , Manikins , Reproducibility of Results
12.
Forensic Sci Med Pathol ; 13(1): 34-43, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28144846

ABSTRACT

Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control is attached to each camera and allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output from our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.


Subject(s)
Forensic Pathology/instrumentation , Imaging, Three-Dimensional , Photogrammetry/instrumentation , Tomography, X-Ray Computed , Whole Body Imaging/instrumentation , Humans , Photogrammetry/methods
13.
Sensors (Basel) ; 16(2): 217, 2016 Feb 06.
Article in English | MEDLINE | ID: mdl-26861351

ABSTRACT

We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from the triangulation of back-projected rays. We validate the projection error of the design against ground-truth data using both synthetic and real-life images. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances.


Subject(s)
Aircraft , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/instrumentation , Photography/methods , Equipment Design , Image Enhancement , Photogrammetry/instrumentation
14.
Sensors (Basel) ; 16(2): 153, 2016 Jan 26.
Article in English | MEDLINE | ID: mdl-26821027

ABSTRACT

Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to generate simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both for improving the design of new sensor fusion methods and for driving the algorithm tuning process.


Subject(s)
Biosensing Techniques/instrumentation , Human Body , Pelvis/physiology , Photogrammetry/instrumentation , Biomechanical Phenomena , Computer Simulation , Humans , Magnetic Fields
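To make the idea of "submitting a Kalman-based fusion algorithm to a benchmark with controlled sensor uncertainties" concrete, here is a deliberately simple one-dimensional sketch: a scalar Kalman filter fusing a noisy gyroscope rate with a noisy absolute orientation measurement, with the noise levels exposed as parameters so they can be swept as in the benchmarking procedure. It illustrates the concept only and is not either of the algorithms evaluated in the paper.

```python
import numpy as np

def fuse_orientation(gyro_rate, orient_meas, dt, gyro_var, meas_var):
    """Scalar Kalman filter: integrate gyro rate, correct with orientation measurements."""
    angle, p = 0.0, 1.0                      # state estimate and its variance
    estimates = []
    for rate, z in zip(gyro_rate, orient_meas):
        angle += rate * dt                   # predict with the gyroscope
        p += gyro_var * dt ** 2
        k = p / (p + meas_var)               # Kalman gain
        angle += k * (z - angle)             # correct with the absolute measurement
        p *= (1 - k)
        estimates.append(angle)
    return np.array(estimates)

# Synthetic benchmark: known true motion, then inject sensor noise of chosen variance.
rng, dt, n = np.random.default_rng(1), 0.01, 500
true_angle = np.sin(np.linspace(0, 2 * np.pi, n))
true_rate = np.gradient(true_angle, dt)
gyro = true_rate + rng.normal(0, 0.5, n)            # noisy gyroscope (rad/s)
meas = true_angle + rng.normal(0, 0.05, n)          # noisy orientation reference (rad)
est = fuse_orientation(gyro, meas, dt, gyro_var=0.5 ** 2, meas_var=0.05 ** 2)
print(f"RMS orientation error = {np.sqrt(np.mean((est - true_angle) ** 2)):.4f} rad")
```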
15.
Aesthet Surg J ; 36(4): 379-87, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26628536

ABSTRACT

BACKGROUND: Surgical rejuvenation alters facial volume distribution to achieve more youthful aesthetic contours. These changes are routinely compared subjectively. The introduction of 3-dimensional (3D) stereophotogrammetry provides a novel method for measuring and comparing surgical results. OBJECTIVES: We sought to quantify how specific facial areas are changed after rejuvenation surgery using the 3D camera. METHODS: Patients undergoing facial rejuvenation were imaged preoperatively and postoperatively with 3D stereophotogrammetry. Images were registered using facial surface landmarks unaltered by surgery. Colorimetric 3D analysis depicting postoperative volume changes was performed utilizing the 3D imaging software and quantitative volume measurements were constructed. RESULTS: Nine patients who underwent combined facelift procedures and fat grafting were evaluated. Median time for postoperative imaging was 4.8 months. Positive changes in facial volume occurred in the forehead, temples, and cheeks (median changes, 0.9 mL ± 4.3 SD; 0.8 mL ± 0.47 SD; and 1.4 mL ± 1.6 SD, respectively). Negative changes in volume occurred in the nasolabial folds, marionette basins, and neck/submental regions (median changes, -1.0 mL ± 0.37 SD; -0.4 mL ± 0.9 SD; and -2.0 mL ± 4.3 SD, respectively). CONCLUSIONS: The technique of 3D stereophotogrammetry provides a tool for quantifying facial volume distribution after rejuvenation procedures. Areas of consistent volume increase include the forehead, temples, and cheeks; areas of negative volume change occur in the nasolabial folds, marionette basins, and submental/chin regions. This technology may be utilized to better understand the dynamic changes that occur with facial rejuvenation and quantify longevity of various rejuvenation techniques. LEVEL OF EVIDENCE: 4 Diagnostic.


Subject(s)
Cosmetic Techniques , Face/surgery , Imaging, Three-Dimensional/instrumentation , Photogrammetry/instrumentation , Plastic Surgery Procedures , Rejuvenation , Skin Aging , Adipose Tissue/transplantation , Age Factors , Aged , Anatomic Landmarks , Esthetics , Face/anatomy & histology , Female , Humans , Lipectomy , Middle Aged , Rhytidoplasty , Software , Time Factors , Transplantation, Autologous , Treatment Outcome
16.
Sensors (Basel) ; 15(12): 30261-9, 2015 Dec 03.
Article in English | MEDLINE | ID: mdl-26633423

ABSTRACT

Imaging systems have an indisputable role in revealing vegetation posture under diverse flow conditions, with image sequences generated by off-the-shelf digital cameras. Such sensors are cheap but introduce a range of distortion effects, an issue only marginally addressed in hydraulic studies focusing on water-vegetation dependencies. This paper aims to bridge this gap by presenting a simple calibration method to remove both camera lens distortion and the refractive effects of water. The effectiveness of the method is illustrated using the variable projected area, computed for both simple and complex-shaped objects. The results demonstrate the importance of correcting images with a combined lens distortion and refraction model prior to determining projected areas and further data analysis. Use of this technique is expected to increase data reliability in future work on vegetated channels.


Subject(s)
Aquatic Organisms/physiology , Photogrammetry/instrumentation , Plants/anatomy & histology , Endoscopes , Equipment Design , Photogrammetry/methods , Photogrammetry/standards , Water/chemistry
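Removing lens distortion is one half of the correction described above; OpenCV's standard camera model handles that part directly, while the refractive correction of the air-water interface would require the additional model the authors describe. The sketch below shows only the lens-distortion step, with hypothetical intrinsics assumed to come from a prior calibration and a placeholder file name.

```python
import numpy as np
import cv2

# Hypothetical intrinsics and distortion coefficients from a prior chessboard calibration.
camera_matrix = np.array([[1450.0, 0.0, 960.0],
                          [0.0, 1450.0, 540.0],
                          [0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.28, 0.10, 0.001, -0.0005, 0.0])  # k1, k2, p1, p2, k3

image = cv2.imread("vegetation_frame.png")          # placeholder file name
h, w = image.shape[:2]

# Compute an optimal new camera matrix and undistort the frame before measuring
# projected areas; a separate refraction model would be applied after this step.
new_matrix, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 1, (w, h))
undistorted = cv2.undistort(image, camera_matrix, dist_coeffs, None, new_matrix)
x, y, rw, rh = roi
cv2.imwrite("vegetation_frame_undistorted.png", undistorted[y:y + rh, x:x + rw])
```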
17.
Sensors (Basel) ; 15(8): 19688-708, 2015 Aug 12.
Article in English | MEDLINE | ID: mdl-26274960

ABSTRACT

Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique for creating a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV images (the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV images from real flights.


Subject(s)
Plant Weeds/physiology , Remote Sensing Technology/methods , Satellite Imagery/methods , Photogrammetry/instrumentation
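One plausible reading of the resampling step above, creating a coarser version of a finer-resolution flight image to approximate acquisition at a higher altitude, can be sketched with a plain image-resize call. The ground sample distances, file names, and scale factors below are assumptions for illustration; the OBIA classification itself is not reproduced.

```python
import cv2

# Assumed ground sample distances (cm/pixel) at the three flight altitudes.
GSD_30M, GSD_60M, GSD_100M = 1.1, 2.3, 3.8          # hypothetical values

def resample_to_altitude(image, gsd_native, gsd_target):
    """Downsample a UAV image so its pixel size matches a coarser target GSD."""
    scale = gsd_native / gsd_target                  # < 1 when simulating a higher flight
    new_size = (int(image.shape[1] * scale), int(image.shape[0] * scale))
    # Area interpolation is the usual choice when shrinking imagery.
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)

uav_30m = cv2.imread("field_30m.tif")                # placeholder orthophoto tile
rs_60m = resample_to_altitude(uav_30m, GSD_30M, GSD_60M)
rs_100m = resample_to_altitude(uav_30m, GSD_30M, GSD_100M)
cv2.imwrite("field_rs_60m.tif", rs_60m)
cv2.imwrite("field_rs_100m.tif", rs_100m)
```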
18.
J Oral Maxillofac Surg ; 72(11): 2256-61, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24856955

ABSTRACT

The purpose of this study was to develop a technique to record physical references and orient digital mesh models to a natural head position using stereophotogrammetry (SP). The first step was to record the digital mesh model of a hanging reference board placed at the capturing position of the SP machine. The board was aligned to true vertical using a plumb bob. It also was aligned with a laser plane parallel to a hanging mirror, which was located at the center of the machine. The parameter derived from the digital mesh model of the board was used to adjust the roll, pitch, and yaw of the subsequent captures of patients' facial images. This information was valid until the next machine calibration. The board placement was repeatable, with standard deviations less than 0.1° for pitch and yaw angles and 0.15° for roll angles.


Subject(s)
Head , Photogrammetry/methods , Humans , Imaging, Three-Dimensional , Patient Positioning , Photogrammetry/instrumentation , Reproducibility of Results
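Applying the roll, pitch, and yaw derived from the reference board to a captured mesh is a single rigid rotation of the vertex array. The sketch below shows that correction step with SciPy's rotation utilities and made-up angles; the board capture and the derivation of the angles are assumed to have been done already, and the axis convention is an assumption rather than the authors' specification.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Angles (degrees) derived from the reference-board capture: hypothetical values.
roll, pitch, yaw = 1.8, -0.6, 0.9

def orient_to_natural_head_position(vertices, roll, pitch, yaw):
    """Rotate mesh vertices so the capture is expressed in natural head position."""
    # Undo the recorded misalignment by applying the inverse rotation.
    correction = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).inv()
    return correction.apply(vertices)

# Placeholder (N, 3) vertex array of a facial mesh in the scanner's frame.
mesh_vertices = np.random.default_rng(2).uniform(-80, 80, size=(1000, 3))
nhp_vertices = orient_to_natural_head_position(mesh_vertices, roll, pitch, yaw)
print(nhp_vertices.shape)
```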
19.
Sensors (Basel) ; 14(9): 17471-90, 2014 Sep 18.
Article in English | MEDLINE | ID: mdl-25237898

ABSTRACT

The use of action cameras for photogrammetry has not been widespread because, until recently, the images provided by their sensors in either still or video capture mode were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. To use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure, a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video capture modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution.


Subject(s)
Photogrammetry/instrumentation , Software , Video Recording/instrumentation , Calibration , Humans , Image Processing, Computer-Assisted
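OpenCV's standard chessboard workflow is the kind of routine such a calibration tool can be built on; the sketch below is a generic version of it (detect corners in several images, estimate intrinsics and distortion, then undistort), not the authors' software. File names, board size, and square size are assumptions, and a fisheye-specific model may be preferable for very wide-angle lenses.

```python
import glob
import numpy as np
import cv2

BOARD = (9, 6)                 # inner corners of the assumed chessboard
SQUARE = 0.025                 # square size in metres (assumed)

# 3D coordinates of the board corners in the board's own plane (z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, size = [], [], None
for path in glob.glob("gopro_calib_*.jpg"):          # placeholder image names
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print(f"reprojection RMS = {rms:.3f} px")
frame = cv2.imread("gopro_scene.jpg")                # placeholder scene image
cv2.imwrite("gopro_scene_undistorted.jpg", cv2.undistort(frame, K, dist))
```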
20.
Sensors (Basel) ; 14(8): 15084-112, 2014 Aug 18.
Article in English | MEDLINE | ID: mdl-25196012

ABSTRACT

Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown, in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.


Subject(s)
Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , Photogrammetry/instrumentation , Photogrammetry/methods , Calibration , Computer Simulation , Models, Theoretical
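The collinearity model that the paper above proposes to extend is the standard photogrammetric projection relating a ground point to its image coordinates through the interior and exterior orientation parameters. A minimal implementation of those equations, without the authors' multi-camera extension and with arbitrary orientation values, is sketched below for illustration only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def collinearity_project(ground_xyz, cam_xyz, omega_phi_kappa_deg, f_mm, x0_mm=0.0, y0_mm=0.0):
    """Project an object-space point to image coordinates with the collinearity equations."""
    # Rotation from object space to image space, parameterised by omega, phi, kappa.
    R = Rotation.from_euler("xyz", omega_phi_kappa_deg, degrees=True).as_matrix()
    d = R @ (np.asarray(ground_xyz, float) - np.asarray(cam_xyz, float))
    x = x0_mm - f_mm * d[0] / d[2]
    y = y0_mm - f_mm * d[1] / d[2]
    return x, y

# Arbitrary interior/exterior orientation for illustration only.
point = [105.0, 210.0, 12.0]          # object point (m)
camera = [100.0, 200.0, 80.0]         # perspective centre (m)
angles = [2.0, -1.5, 30.0]            # omega, phi, kappa (degrees)
print(collinearity_project(point, camera, angles, f_mm=35.0))
```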