1.
Sensors (Basel) ; 22(18)2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36146070

ABSTRACT

Computer-aided diagnosis (CAD) systems can be used to process breast ultrasound (BUS) images with the goal of enhancing the capability of diagnosing breast cancer. Many CAD systems operate by analyzing the region-of-interest (ROI) that contains the tumor in the BUS image using conventional texture-based classification models and deep learning-based classification models. Hence, the development of these systems requires automatic methods to localize the ROI that contains the tumor in the BUS image. Deep learning object-detection models can be used to localize the ROI that contains the tumor, but the ROI generated by one model might be better than the ROIs generated by other models. In this study, a new method, called the edge-based selection method, is proposed to analyze the ROIs generated by different deep learning object-detection models with the goal of selecting the ROI that improves the localization of the tumor region. The proposed method employs edge maps computed for BUS images using the recently introduced Dense Extreme Inception Network (DexiNed) deep learning edge-detection model. To the best of our knowledge, our study is the first to employ a deep learning edge-detection model to detect the tumor edges in BUS images. The proposed edge-based selection method is applied to analyze the ROIs generated by four deep learning object-detection models. The performance of the proposed edge-based selection method and the four deep learning object-detection models is evaluated using two BUS image datasets. The first dataset, which is used to perform cross-validation evaluation analysis, is a private dataset that includes 380 BUS images. The second dataset, which is used to perform generalization evaluation analysis, is a public dataset that includes 630 BUS images.
For both the cross-validation evaluation analysis and the generalization evaluation analysis, the proposed method obtained the overall ROI detection rate, mean precision, mean recall, and mean F1-score values of 98%, 0.91, 0.90, and 0.90, respectively. Moreover, the results show that the proposed edge-based selection method outperformed the four deep learning object-detection models as well as three baseline-combining methods that can be used to combine the ROIs generated by the four deep learning object-detection models. These findings suggest the potential of employing our proposed method to analyze the ROIs generated using different deep learning object-detection models to select the ROI that improves the localization of the tumor region.
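The selection idea described in this abstract can be illustrated with a small sketch: given an edge map computed for the BUS image and the candidate ROIs proposed by several detectors, score each ROI against the edge map and keep the best one. The scoring criterion below (mean edge strength inside the ROI) is an assumption for illustration only; the paper's actual edge-based selection rule is not reproduced here.

```python
# Illustrative sketch: choose, among candidate ROIs proposed by different
# object-detection models, the one that best agrees with a precomputed
# edge map (e.g., from an edge-detection model such as DexiNed).
# The mean-edge-strength criterion is a hypothetical stand-in.

def roi_edge_score(edge_map, roi):
    """Mean edge strength inside a (row0, col0, row1, col1) ROI."""
    r0, c0, r1, c1 = roi
    total, count = 0.0, 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            total += edge_map[r][c]
            count += 1
    return total / count if count else 0.0

def select_best_roi(edge_map, candidate_rois):
    """Return the candidate ROI with the highest edge score."""
    return max(candidate_rois, key=lambda roi: roi_edge_score(edge_map, roi))

# Toy 6x6 edge map with strong edges concentrated in the top-left block.
edge_map = [[1.0 if r < 3 and c < 3 else 0.0 for c in range(6)]
            for r in range(6)]
rois = [(0, 0, 3, 3), (3, 3, 6, 6)]
best = select_best_roi(edge_map, rois)
```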


Subject(s)
Breast Neoplasms , Deep Learning , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Diagnosis, Computer-Assisted , Female , Humans , Ultrasonography, Mammary/methods
2.
Med Phys ; 49(8): 4999-5013, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35608237

ABSTRACT

BACKGROUND: Ultrasound is employed in needle interventions to visualize the anatomical structures and track the needle. Nevertheless, needle detection in ultrasound images is a difficult task, specifically at steep insertion angles. PURPOSE: A new method is presented to enable effective needle detection using ultrasound B-mode and power Doppler analyses. METHODS: A small buzzer is used to excite the needle and an ultrasound system is utilized to acquire B-mode and power Doppler images for the needle. The B-mode and power Doppler images are processed using Radon transform and local-phase analysis to initially detect the axis of the needle. The detection of the needle axis is improved by processing the power Doppler image using alpha shape analysis to define a region of interest (ROI) that contains the needle. Also, a set of feature maps is extracted from the ROI in the B-mode image. The feature maps are processed using a machine learning classifier to construct a likelihood image that visualizes the posterior needle likelihoods of the pixels. Radon transform is applied to the likelihood image to achieve an improved needle axis detection. Additionally, the region in the B-mode image surrounding the needle axis is analyzed to identify the needle tip using a custom-made probabilistic approach. Our method was utilized to detect needles inserted in ex vivo animal tissues at shallow [20°-40°), moderate [40°-60°), and steep [60°-85°] angles. RESULTS: Our method detected the needles with failure rates equal to 0% and mean angle, axis, and tip errors less than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Additionally, our method achieved favorable results compared to two recently introduced needle detection methods. CONCLUSIONS: The results indicate the potential of applying our method to achieve effective needle detection in ultrasound images.
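The Radon-transform step for initial axis detection can be sketched with a closely related Hough-style voting scheme: for each candidate angle, every bright point votes for its signed distance rho = x·cos(θ) + y·sin(θ), and a straight needle shows up as many points agreeing on a single rho. This is a simplified stand-in for the paper's Radon-based processing of the Doppler and likelihood images; the synthetic point set and angle grid are illustrative.

```python
import math

def dominant_line_angle(points, angles_deg):
    """Hough-style voting, closely related to the Radon transform: for each
    candidate angle theta, each point votes for rho = x*cos(t) + y*sin(t);
    a line appears as many points sharing one rho. Returns the NORMAL angle
    of the strongest line (degrees)."""
    best_angle, best_votes = None, -1
    for a in angles_deg:
        t = math.radians(a)
        votes = {}
        for x, y in points:
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[rho] = votes.get(rho, 0) + 1
        peak = max(votes.values())
        if peak > best_votes:
            best_votes, best_angle = peak, a
    return best_angle

# Synthetic "needle" lying along y = x; the normal of that line is at 135 deg,
# so all 20 points agree on rho = 0 there.
line_points = [(i, i) for i in range(20)]
angle = dominant_line_angle(line_points, range(0, 180, 5))
```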


Subject(s)
Needles , Radon , Animals , Ultrasonography/methods , Ultrasonography, Doppler , Ultrasonography, Interventional
3.
PeerJ Comput Sci ; 7: e498, 2021.
Article in English | MEDLINE | ID: mdl-33977136

ABSTRACT

Several higher education institutions have harnessed e-learning tools to empower the application of different learning models that enrich the educational process. Nevertheless, the reliance on commercial or open-source platforms, in some cases, to deliver e-learning could impact system acceptability, usability, and capability. Therefore, this study suggests design methods to develop effective learning management capabilities such as attendance, coordination, course folder, course section homepage, learning materials, syllabus, emails, and student tracking within a university portal named MyGJU. In particular, mechanisms to facilitate system setup, data integrity, information security, e-learning data reuse, version control automation, and multi-user collaboration have been applied to enable the e-learning modules in MyGJU to overcome some of the drawbacks of their counterparts in Moodle. Such system improvements are required to motivate both educators and students to engage in online learning. In addition, feature comparisons between MyGJU, Moodle, and other in-house systems have been conducted for reference. Also, the system deployment outcomes and user survey results confirm wide acceptance among instructors and students of MyGJU, as opposed to Moodle, as a first point of contact for basic e-learning tasks. Further, the results illustrate that the in-house e-learning modules in MyGJU are engaging, easy to use, useful, and interactive.

4.
Data Brief ; 33: 106534, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33299909

ABSTRACT

The aim of this paper is to present a dataset for Wi-Fi-based human activity recognition. The dataset is comprised of five experiments performed by 30 different subjects in three different indoor environments. The experiments performed in the first two environments are of a line-of-sight (LOS) nature, while the experiments performed in the third environment are of a non-line-of-sight (NLOS) nature. Each subject performed 20 trials for each experiment, which makes the overall number of recorded trials in the dataset equal to 3000 (30 subjects × 5 experiments × 20 trials). To record the data, we used the channel state information (CSI) tool [1] to capture the exchanged Wi-Fi packets between a Wi-Fi transmitter and receiver. The utilized transmitter and receiver are retrofitted with the Intel 5300 network interface card, which enabled us to capture the CSI values that are contained in the recorded transmissions. Unlike other publicly available human activity datasets, this dataset provides researchers with the ability to test their developed methodologies on both LOS and NLOS environments, in addition to many different variations of human movements, such as walking, falling, turning, and picking up a pen from the ground.
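The trial layout described above (30 subjects × 5 experiments × 20 trials = 3000 trials) can be enumerated with a short sketch. The identifier format used here is hypothetical; the dataset's actual file naming may differ.

```python
# Sketch of enumerating the trials in the described layout. The string
# format "subjectSS_expE_trialTT" is an illustrative assumption, not the
# dataset's actual naming convention.
def enumerate_trials(n_subjects=30, n_experiments=5, n_trials=20):
    trials = []
    for s in range(1, n_subjects + 1):
        for e in range(1, n_experiments + 1):
            for t in range(1, n_trials + 1):
                trials.append(f"subject{s:02d}_exp{e}_trial{t:02d}")
    return trials

all_trials = enumerate_trials()
```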

5.
Sensors (Basel) ; 20(23)2020 Nov 30.
Article in English | MEDLINE | ID: mdl-33265900

ABSTRACT

This study aims to enable effective breast ultrasound image classification by combining deep features with conventional handcrafted features to classify the tumors. In particular, the deep features are extracted from a pre-trained convolutional neural network model, namely the VGG19 model, at six different extraction levels. The deep features extracted at each level are analyzed using a features selection algorithm to identify the deep feature combination that achieves the highest classification performance. Furthermore, the extracted deep features are combined with handcrafted texture and morphological features and processed using features selection to investigate the possibility of improving the classification performance. The cross-validation analysis, which is performed using 380 breast ultrasound images, shows that the best combination of deep features is obtained using a feature set, denoted by CONV features, that includes the convolution features extracted from all convolution blocks of the VGG19 model. In particular, the CONV features achieved mean accuracy, sensitivity, and specificity values of 94.2%, 93.3%, and 94.9%, respectively. The analysis also shows that the performance of the CONV features degrades substantially when the features selection algorithm is not applied. The classification performance of the CONV features is improved by combining these features with handcrafted morphological features to achieve mean accuracy, sensitivity, and specificity values of 96.1%, 95.7%, and 96.3%, respectively. Furthermore, the cross-validation analysis demonstrates that the CONV features and the combined CONV and morphological features outperform the handcrafted texture and morphological features as well as the fine-tuned VGG19 model.
The generalization performance of the CONV features and the combined CONV and morphological features is demonstrated by performing the training using the 380 breast ultrasound images and the testing using another dataset that includes 163 images. The results suggest that the combined CONV and morphological features can achieve effective breast ultrasound image classifications that increase the capability of detecting malignant tumors and reduce the potential of misclassifying benign tumors.
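The abstract repeatedly relies on a features selection algorithm to prune the deep and handcrafted feature sets. As a hedged illustration, the sketch below implements sequential forward selection with a toy nearest-centroid accuracy criterion; the paper's actual selection algorithm and scoring function are not specified here and may well differ.

```python
# Sketch of sequential forward feature selection. The nearest-centroid
# training-accuracy scorer is an illustrative stand-in for whatever
# criterion the paper's features selection algorithm optimizes.

def score(feature_subset, X, y):
    """Toy criterion: nearest-centroid training accuracy on the subset."""
    classes = sorted(set(y))
    cents = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        cents[c] = [sum(r[f] for r in rows) / len(rows) for f in feature_subset]
    correct = 0
    for x, label in zip(X, y):
        v = [x[f] for f in feature_subset]
        pred = min(classes, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, cents[c])))
        correct += pred == label
    return correct / len(y)

def forward_selection(X, y, n_features):
    """Greedily add the feature that most improves the score."""
    selected = []
    remaining = list(range(len(X[0])))
    while remaining:
        best_f = max(remaining, key=lambda f: score(selected + [f], X, y))
        if selected and score(selected + [best_f], X, y) <= score(selected, X, y):
            break  # no improvement: stop early
        selected.append(best_f)
        remaining.remove(best_f)
        if len(selected) == n_features:
            break
    return selected

# Toy data: feature 1 separates the classes perfectly; feature 0 is noise.
X = [[0.9, 0.0], [0.1, 0.1], [0.8, 1.0], [0.2, 0.9]]
y = [0, 0, 1, 1]
chosen = forward_selection(X, y, n_features=1)
```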


Subject(s)
Breast Neoplasms , Deep Learning , Ultrasonography , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Neural Networks, Computer
6.
Data Brief ; 31: 105668, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32462061

ABSTRACT

This paper presents a dataset for Wi-Fi-based human-to-human interaction recognition that comprises twelve different interactions performed by 40 different pairs of subjects in an indoor environment. Each pair of subjects performed ten trials of each of the twelve interactions and the total number of trials recorded in our dataset for all the 40 pairs of subjects is 4800 trials (i.e., 40 pairs of subjects × 12 interactions × 10 trials). The publicly available CSI tool [1] is used to record the Wi-Fi signals transmitted from a commercial off-the-shelf access point, namely the Sagemcom 2704 access point, to a desktop computer that is equipped with an Intel 5300 network interface card. The recorded Wi-Fi signals consist of the Received Signal Strength Indicator (RSSI) values and the Channel State Information (CSI) values. Unlike the publicly available Wi-Fi-based human activity datasets, which mainly have focused on activities performed by a single human, our dataset provides a collection of Wi-Fi signals that are recorded for 40 different pairs of subjects while performing twelve two-person interactions. The presented dataset can be exploited to advance Wi-Fi-based human activity recognition in different aspects, such as the use of various machine learning algorithms to recognize different human-to-human interactions.
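For a dataset like this one, recognition methods are usually evaluated so that no pair of subjects appears in both the training and test sets, which tests generalization to unseen people. The sketch below performs such a pair-wise split of the 4800 trials (40 pairs × 12 interactions × 10 trials); the 80/20 pair split is an illustrative choice, not a protocol taken from the paper.

```python
# Sketch: assign whole subject pairs to either the training or the test
# set so the two sets share no pair. The test_pairs count is an assumption.
def pairwise_split(n_pairs=40, n_interactions=12, n_trials=10, test_pairs=8):
    train, test = [], []
    for p in range(n_pairs):
        bucket = test if p < test_pairs else train
        for i in range(n_interactions):
            for t in range(n_trials):
                bucket.append((p, i, t))
    return train, test

train, test = pairwise_split()
```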

7.
Sensors (Basel) ; 20(8)2020 Apr 24.
Article in English | MEDLINE | ID: mdl-32344557

ABSTRACT

Game-based rehabilitation systems provide an effective tool to engage cerebral palsy patients in physical exercises within an exciting and entertaining environment. A crucial factor to ensure the effectiveness of game-based rehabilitation systems is to assess the correctness of the movements performed by the patient during the game-playing sessions. In this study, we propose a game-based rehabilitation system for upper-limb cerebral palsy that includes three game-based exercises and a computerized assessment method. The game-based exercises aim to engage the participant in shoulder flexion, shoulder horizontal abduction/adduction, and shoulder adduction physical exercises that target the right arm. Human interaction with the game-based rehabilitation system is achieved using a Kinect sensor that tracks the skeleton joints of the participant. The computerized assessment method aims to assess the correctness of the right arm movements during each game-playing session by analyzing the tracking data acquired by the Kinect sensor. To evaluate the performance of the computerized assessment method, two groups of participants volunteered to participate in the game-based exercises. The first group included six cerebral palsy children and the second group included twenty typically developing subjects. For every participant, the computerized assessment method was employed to assess the correctness of the right arm movements in each game-playing session and these computer-based assessments were compared with matching gold standard evaluations provided by an experienced physiotherapist. The results reported in this study suggest the feasibility of employing the computerized assessment method to evaluate the correctness of the right arm movements during the game-playing sessions.
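Assessing shoulder movements from Kinect skeleton data ultimately reduces to geometry on tracked 3D joint positions. The sketch below computes the angle at the shoulder from three joints; which joints the system actually uses and what correctness thresholds it applies are assumptions for illustration.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c,
    from 3-D joint positions such as Kinect skeleton coordinates."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(q * q for q in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Arm raised straight out to the side: spine directly below the shoulder,
# elbow level with the shoulder (hypothetical coordinates, meters).
spine = (0.0, -1.0, 0.0)
shoulder = (0.0, 0.0, 0.0)
elbow = (1.0, 0.0, 0.0)
angle = joint_angle(spine, shoulder, elbow)
```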


Subject(s)
Cerebral Palsy/therapy , Stroke Rehabilitation/methods , Child , Child, Preschool , Exercise Therapy/methods , Female , Humans , Joints/physiology , Male , Shoulder/physiology , Skeleton/physiology , Upper Extremity/physiology
8.
Med Phys ; 47(6): 2356-2379, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32160309

ABSTRACT

PURPOSE: Ultrasound imaging is used in many minimally invasive needle insertion procedures to track the advancing needle, but localizing the needle in ultrasound images can be challenging, particularly at steep insertion angles. Previous methods have been introduced to localize the needle in ultrasound images, but the majority of these methods are based on ultrasound B-mode image analysis that is affected by the needle visibility. To address this limitation, we propose a two-phase, signature-based method to achieve reliable and accurate needle localization in curvilinear ultrasound images based on the beamformed radio frequency (RF) signals that are acquired using conventional ultrasound imaging systems. METHODS: In the first phase of our proposed method, the beamformed RF signals are divided into overlapping segments and these segments are processed to extract needle-specific features to identify the needle echoes. The features are analyzed using a support vector machine classifier to synthesize a quantitative image that highlights the needle. The quantitative image is processed using the Radon transform to achieve a reliable and accurate signature-based estimation of the needle axis. In the second phase, the accuracy of the needle axis estimation is improved by processing the RF samples located around the signature-based estimation of the needle axis using local phase analysis combined with the Radon transform. Moreover, a probabilistic approach is employed to identify the needle tip. The proposed method is used to localize needles with two different sizes inserted in ex vivo animal tissue specimens at various insertion angles. RESULTS: Our proposed method achieved reliable and accurate needle localization for an extended range of needle insertion angles with failure rates of 0% and mean angle, axis, and tip errors smaller than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively.
Moreover, our proposed method outperformed a recently introduced needle localization method that is based on B-mode image analysis. CONCLUSIONS: These results suggest the potential of employing our signature-based method to achieve reliable and accurate needle localization during ultrasound-guided needle insertion procedures.
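The first phase described in the METHODS, dividing the beamformed RF signals into overlapping segments and computing per-segment features, can be sketched as follows. The segment length, step, and the simple energy feature are illustrative assumptions; the paper's needle-specific features are richer and are not reproduced here.

```python
def overlapping_segments(signal, seg_len, step):
    """Divide a 1-D beamformed RF line into overlapping segments."""
    return [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, step)]

def segment_energy(seg):
    """A simple per-segment feature (stand-in for the paper's features)."""
    return sum(s * s for s in seg)

# Synthetic RF line with a strong echo around sample 40.
rf_line = [0.0] * 40 + [1.0] * 10 + [0.0] * 50
segs = overlapping_segments(rf_line, seg_len=10, step=5)
energies = [segment_energy(s) for s in segs]
peak_segment = energies.index(max(energies))  # segment containing the echo
```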


Subject(s)
Image Processing, Computer-Assisted , Needles , Animals , Phantoms, Imaging , Ultrasonography , Ultrasonography, Interventional
9.
Neurosci Lett ; 698: 113-120, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30630057

ABSTRACT

Decoding the movements of different fingers within the same hand can increase the control dimensionality of electroencephalography (EEG)-based brain-computer interface (BCI) systems. This, in turn, enables subjects who use assistive devices to better perform various dexterous tasks. However, decoding the movements performed by different fingers within the same hand by analyzing the EEG signals is considered a challenging task. In this paper, we present a new EEG-based BCI system for decoding the movements of each finger within the same hand based on analyzing the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi-Williams distribution (CWD). In particular, the CWD is employed to characterize the time-varying spectral components of the EEG signals and extract features that can capture movement-related information encapsulated within the EEG signals. The extracted CWD-based features are used to build a two-layer classification framework that decodes finger movements within the same hand. The performance of the proposed system is evaluated by recording the EEG signals for eighteen healthy subjects while performing twelve finger movements using their right hands. The results demonstrate the efficacy of the proposed system to decode finger movements within the same hand of each subject.
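A full Choi-Williams distribution is involved to implement, so the sketch below uses a plain short-time Fourier magnitude as a stand-in time-frequency representation to show the kind of per-frame spectral feature extraction such a pipeline performs (here, the dominant frequency bin of each frame). The CWD itself, a quadratic TFD with an exponential kernel, is deliberately not reproduced.

```python
import cmath
import math

# Stand-in time-frequency representation: per-frame DFT magnitudes.
# The paper uses the Choi-Williams distribution instead; this is only an
# illustration of extracting a feature from a time-frequency map.

def dft_mag(frame):
    """Magnitudes of the first n/2 DFT bins of one frame."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def dominant_bins(signal, frame_len):
    """Dominant frequency bin in each non-overlapping frame."""
    bins = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        mags = dft_mag(signal[start:start + frame_len])
        bins.append(mags.index(max(mags)))
    return bins

# 8 Hz tone sampled at 64 Hz: with 64-sample frames, bin k is k Hz.
fs = 64
sig = [math.sin(2 * math.pi * 8 * t / fs) for t in range(2 * fs)]
bins = dominant_bins(sig, frame_len=fs)
```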


Subject(s)
Electroencephalography , Fingers/physiology , Hand/physiology , Movement/physiology , Adult , Algorithms , Brain-Computer Interfaces , Electroencephalography/methods , Female , Humans , Imagination/physiology , Male , Young Adult
10.
Sensors (Basel) ; 18(10)2018 Oct 16.
Article in English | MEDLINE | ID: mdl-30332743

ABSTRACT

Curvilinear ultrasound transducers are commonly used in various needle insertion interventions, but localizing the needle in curvilinear ultrasound images is usually challenging. In this paper, a new method is proposed to localize the needle in curvilinear ultrasound images by exciting the needle using a piezoelectric buzzer and imaging the excited needle using a curvilinear ultrasound transducer to acquire a power Doppler image and a B-mode image. The needle-induced Doppler responses that appear in the power Doppler image are analyzed to estimate the needle axis initially and identify the candidate regions that are expected to include the needle. The candidate needle regions in the B-mode image are analyzed to improve the localization of the needle axis. The needle tip is determined by analyzing the intensity variations of the power Doppler and B-mode images around the needle axis. The proposed method is employed to localize different needles that are inserted in three ex vivo animal tissue types at various insertion angles, and the results demonstrate the capability of the method to achieve automatic, reliable and accurate needle localization. Furthermore, the proposed method outperformed two existing needle localization methods.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Ultrasonography, Doppler/methods , Animals , Cattle , Equipment Design , Feasibility Studies , Liver/diagnostic imaging , Muscle, Skeletal/diagnostic imaging , Needles , Ultrasonography, Doppler/instrumentation
11.
Med Image Anal ; 50: 145-166, 2018 12.
Article in English | MEDLINE | ID: mdl-30336383

ABSTRACT

Three-dimensional (3D) motorized curvilinear ultrasound probes provide an effective, low-cost tool to guide needle interventions, but localizing and tracking the needle in 3D ultrasound volumes is often challenging. In this study, a new method is introduced to localize and track the needle using 3D motorized curvilinear ultrasound probes. In particular, a low-cost camera mounted on the probe is employed to estimate the needle axis. The camera-estimated axis is used to identify a volume of interest (VOI) in the ultrasound volume that enables high needle visibility. This VOI is analyzed using local phase analysis and the random sample consensus algorithm to refine the camera-estimated needle axis. The needle tip is determined by searching the localized needle axis using a probabilistic approach. Dynamic needle tracking in a sequence of 3D ultrasound volumes is enabled by iteratively applying a Kalman filter to estimate the VOI that includes the needle in the successive ultrasound volume and limiting the localization analysis to this VOI. A series of ex vivo animal experiments are conducted to evaluate the accuracy of needle localization and tracking. The results show that the proposed method can localize the needle in individual ultrasound volumes with maximum error rates of 0.7 mm for the needle axis, 1.7° for the needle angle, and 1.2 mm for the needle tip. Moreover, the proposed method can track the needle in a sequence of ultrasound volumes with maximum error rates of 1.0 mm for the needle axis, 2.0° for the needle angle, and 1.7 mm for the needle tip. These results suggest the feasibility of applying the proposed method to localize and track the needle using 3D motorized curvilinear ultrasound probes.
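The Kalman-filter idea in this abstract, predicting where the needle will be in the next ultrasound volume so the VOI can be centered there, can be sketched in one dimension with a constant-velocity model. The process and measurement noise values are illustrative assumptions, not the paper's tuned parameters.

```python
# 1-D constant-velocity Kalman filter: state is (position, velocity),
# one predict/update cycle per ultrasound volume (dt = 1). The predicted
# position is where a VOI would be centered for the next volume.

def kalman_track(measurements, q=0.01, r=0.25):
    x, v = measurements[0], 0.0          # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    estimates = []
    for z in measurements:
        # predict: x' = x + v, v' = v;  P' = F P F^T + Q
        x, v = x + v, v
        p = [[p[0][0] + 2 * p[0][1] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1],                   p[1][1] + q]]
        # update with the measured needle position z (H = [1, 0])
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        estimates.append(x)
    return estimates

# Needle advancing ~1 mm per volume with small measurement jitter (mm).
zs = [0.0, 1.1, 1.9, 3.05, 4.0, 4.95, 6.1, 7.0]
est = kalman_track(zs)
```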


Subject(s)
Imaging, Three-Dimensional , Ultrasonography/methods , Imaging, Three-Dimensional/instrumentation , Imaging, Three-Dimensional/methods , Needles
12.
Sensors (Basel) ; 18(8)2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30127311

ABSTRACT

Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analyses of the nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high resolution time-frequency representation of the EEG signals and capture the spectral variations of the EEG signals over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency-domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we have utilized the 2D arousal-valence plane to develop four emotion labeling schemes of the EEG signals, such that each emotion labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the different emotion classes that are defined using each of the four emotion labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset.
Moreover, we design three performance evaluation analyses, namely the channel-based analysis, feature-based analysis and neutral class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels that cover various regions in the brain, reducing the dimensionality of the extracted time-frequency features and excluding the EEG signals that correspond to the neutral class, on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the various emotion classes defined using each of the four emotion labeling schemes are within the range of 73.8%-86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
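One plausible labeling scheme on the 2D arousal-valence plane is a simple quadrant mapping, sketched below. The paper defines four labeling schemes whose exact class definitions are not reproduced here; the 1-9 rating scale with 5 as the midpoint follows the DEAP dataset's self-assessment scale, and the class names are illustrative.

```python
# Sketch of a quadrant-based emotion labeling scheme on the
# arousal-valence plane. Class names and the midpoint are assumptions.
def quadrant_label(valence, arousal, midpoint=5.0):
    if valence >= midpoint and arousal >= midpoint:
        return "high-arousal positive"
    if valence < midpoint and arousal >= midpoint:
        return "high-arousal negative"
    if valence < midpoint:
        return "low-arousal negative"
    return "low-arousal positive"

label = quadrant_label(valence=7.5, arousal=3.0)
```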


Subject(s)
Brain/physiology , Electroencephalography , Emotions , Support Vector Machine , Female , Humans , Male
13.
Article in English | MEDLINE | ID: mdl-29505407

ABSTRACT

Temporal-enhanced ultrasound (TeUS) is a novel noninvasive imaging paradigm that captures information from a temporal sequence of backscattered US radio frequency data obtained from a fixed tissue location. This technology has been shown to be effective for classification of various in vivo and ex vivo tissue types, including distinguishing prostate cancer from benign tissue. Our previous studies have indicated two primary phenomena that influence TeUS: 1) changes in tissue temperature due to acoustic absorption and 2) micro vibrations of tissue due to physiological vibration. In this paper, first, a theoretical formulation for TeUS is presented. Next, a series of simulations are carried out to investigate micro vibration as a source of tissue-characterizing information in TeUS. The simulations include finite element modeling of micro vibration in synthetic phantoms, followed by US image generation during TeUS imaging. The simulations are performed on two media, a sparse array of scatterers and a medium with pathology-mimicking scatterers that match the nuclei distribution extracted from a prostate digital pathology data set. Statistical analysis of the simulated TeUS data shows its ability to accurately classify tissue types. Our experiments suggest that TeUS can capture the microstructural differences, including scatterer density, in tissues as they react to micro vibrations.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Ultrasonography/methods , Computer Simulation , Databases, Factual , Finite Element Analysis , Humans , Male , Phantoms, Imaging , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging
14.
Sensors (Basel) ; 17(9)2017 Aug 23.
Article in English | MEDLINE | ID: mdl-28832513

ABSTRACT

This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining performing each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations.
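The hierarchical (two-layer) classification idea can be sketched as a coarse classifier that picks a group of MI tasks, followed by a group-specific classifier that picks the task within that group. Nearest-centroid classifiers on toy 2D features stand in for the actual EEG classifiers, and the group and task names below are hypothetical.

```python
# Sketch of two-layer hierarchical classification. The centroids, feature
# space, and class names are illustrative assumptions.

def nearest(centroids, x):
    """Key of the centroid closest (squared Euclidean) to feature vector x."""
    return min(centroids, key=lambda k: sum((a - b) ** 2
                                            for a, b in zip(centroids[k], x)))

# Layer 1: coarse task groups. Layer 2: tasks within each group.
group_centroids = {"flexion": (0.0, 0.0), "extension": (10.0, 10.0)}
task_centroids = {
    "flexion": {"thumb": (0.0, 0.0), "index": (0.0, 2.0)},
    "extension": {"thumb": (10.0, 10.0), "index": (10.0, 12.0)},
}

def classify(x):
    group = nearest(group_centroids, x)
    task = nearest(task_centroids[group], x)
    return group, task

pred = classify((0.2, 1.8))
```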


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Hand , Humans , Imagination , User-Computer Interface
15.
Comput Math Methods Med ; 2016: 6740956, 2016.
Article in English | MEDLINE | ID: mdl-28127383

ABSTRACT

Ultrasound imaging is commonly used for breast cancer diagnosis, but accurate interpretation of breast ultrasound (BUS) images is often challenging and operator-dependent. Computer-aided diagnosis (CAD) systems can be employed to provide the radiologists with a second opinion to improve the diagnosis accuracy. In this study, a new CAD system is developed to enable accurate BUS image classification. In particular, an improved texture analysis is introduced, in which the tumor is divided into a set of nonoverlapping regions of interest (ROIs). Each ROI is analyzed using gray-level cooccurrence matrix features and a support vector machine classifier to estimate its tumor class indicator. The tumor class indicators of all ROIs are combined using a voting mechanism to estimate the tumor class. In addition, morphological analysis is employed to classify the tumor. A probabilistic approach is used to fuse the classification results of the multiple-ROI texture analysis and morphological analysis. The proposed approach is applied to classify 110 BUS images that include 64 benign and 46 malignant tumors. The accuracy, specificity, and sensitivity obtained using the proposed approach are 98.2%, 98.4%, and 97.8%, respectively. These results demonstrate that the proposed approach can effectively be used to differentiate benign and malignant tumors.
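The two core ingredients of this abstract, gray-level co-occurrence matrix (GLCM) texture features per ROI and a voting mechanism over the per-ROI class indicators, can be sketched minimally. The (0, 1) horizontal offset, two gray levels, and the contrast feature are illustrative choices; the paper's actual GLCM offsets, quantization, and feature set are not reproduced.

```python
def glcm(image, levels):
    """Co-occurrence counts of horizontally adjacent gray-level pairs."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def glcm_contrast(m):
    """Classic GLCM contrast feature: sum of (i - j)^2 * m[i][j]."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))

def majority_vote(indicators):
    """Combine per-ROI class indicators (0 = benign, 1 = malignant)."""
    return int(sum(indicators) > len(indicators) / 2)

flat = [[0, 0, 0, 0] for _ in range(4)]      # uniform texture
stripes = [[0, 1, 0, 1] for _ in range(4)]   # alternating texture
c_flat = glcm_contrast(glcm(flat, 2))
c_stripes = glcm_contrast(glcm(stripes, 2))
tumor_class = majority_vote([1, 0, 1, 1, 0])
```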


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Ultrasonography, Mammary , Area Under Curve , Diagnosis, Computer-Assisted , False Positive Reactions , Female , Humans , Models, Statistical , Normal Distribution , Pattern Recognition, Automated/methods , Probability , Reproducibility of Results , Sensitivity and Specificity , Software
16.
Med Phys ; 42(11): 6221-33, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26520715

ABSTRACT

PURPOSE: Ultrasound imaging provides a low-cost, real-time modality to guide needle insertion procedures, but localizing the needle using conventional ultrasound images is often challenging. Estimating the needle trajectory can increase the success rate of ultrasound-guided needle interventions and improve patient comfort. In this study, a novel method is introduced to localize the needle trajectory in curvilinear ultrasound images based on the needle reflection pattern of circular ultrasound waves. METHODS: A circular ultrasound wave was synthesized by sequentially firing the elements of a curvilinear transducer and recording the radio-frequency signals received by each element. Two features, namely, the large amplitude and repetitive reflection pattern, were used to identify the needle echoes in the received signals. The trajectory of the needle was estimated by fitting the arrival times of needle echoes to an equation that describes needle reflection of circular waves. The method was employed to estimate the trajectories of needles inserted in agar phantom, beef muscle, and porcine tissue specimens. RESULTS: The maximum error rates of estimating the needle trajectories were on the order of 1 mm and 3° for the radial and azimuth coordinates, respectively. CONCLUSIONS: These results suggest that the proposed method can improve the robustness and accuracy of needle segmentation methods by adding signature-based detection of the needle trajectory in curvilinear ultrasound images. The method can be implemented on conventional ultrasound imaging systems.
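One of the two needle signatures named in the METHODS, the repetitive reflection pattern, can be illustrated with a short autocorrelation sketch: a train of repeated echoes produces a strong autocorrelation peak at the repetition lag. The synthetic echo train and the lag search range are illustrative assumptions; the paper's actual detection of this pattern in the received element signals is not reproduced.

```python
def autocorr(signal, lag):
    """Unnormalized autocorrelation of a 1-D signal at a given lag."""
    n = len(signal) - lag
    return sum(signal[i] * signal[i + lag] for i in range(n))

def repetition_lag(signal, min_lag, max_lag):
    """Lag with the strongest autocorrelation, i.e., the echo spacing."""
    return max(range(min_lag, max_lag + 1), key=lambda l: autocorr(signal, l))

# Echo train: a spike every 16 samples, mimicking repeated needle echoes.
sig = [1.0 if i % 16 == 0 else 0.0 for i in range(128)]
lag = repetition_lag(sig, min_lag=4, max_lag=32)
```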


Subject(s)
Endoscopic Ultrasound-Guided Fine Needle Aspiration/methods; Image Interpretation, Computer-Assisted/methods; Needles; Pattern Recognition, Automated/methods; Surgery, Computer-Assisted/methods; Ultrasonography, Interventional/methods; Algorithms; Endoscopic Ultrasound-Guided Fine Needle Aspiration/instrumentation; Humans; Image Enhancement/methods; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity; Surgery, Computer-Assisted/instrumentation; Ultrasonography, Interventional/instrumentation
17.
IEEE Trans Biomed Eng ; 60(2): 310-20, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23144023

ABSTRACT

Ultrasound (US) radio-frequency (RF) time series is an effective tissue classification method that enables accurate cancer diagnosis, but the mechanisms underlying this method are not completely understood. This paper presents a model to describe the variations in tissue temperature and sound speed that take place during the RF time series scanning procedures and relate these variations to US backscattering. The model was used to derive four novel characterization features. These features were used to classify three animal tissues, achieving accuracies as high as 88.01%. The performance of the proposed features was compared with RF time series features proposed in a previous study. The results indicated that the US-induced variations in tissue temperature and sound speed, which were used to derive the proposed features, were important contributors to the tissue typing capabilities of the RF time series. Simulations carried out to estimate the heating induced during the scanning procedure employed in this study showed temperature rises lower than 2 °C. The model and results presented in this paper can be used to improve the RF time series method.
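As a rough illustration of feature-based tissue typing from a time series, the sketch below extracts simple spectral-fit features and classifies them with a nearest-centroid rule. Both the features and the classifier are simplified stand-ins (the paper's features are physics-derived and its subject list indicates an SVM was used):

```python
import numpy as np

def rf_features(ts):
    # toy characterization features for one time series: slope and
    # intercept of a line fit to the log power spectrum, plus log energy
    p = np.abs(np.fft.rfft(ts)) ** 2 + 1e-12
    f = np.arange(p.size)
    slope, intercept = np.polyfit(f, np.log(p), 1)
    return np.array([slope, intercept, np.log(ts.var() + 1e-12)])

def nearest_centroid_fit(X, y):
    # one centroid per tissue class in feature space
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]
```

Two synthetic "tissues" with different temporal correlation (e.g., white noise versus a random walk) separate cleanly under these features, which is the same mechanism the abstract describes: scanning-induced changes leave a class-specific signature in the time series.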


Subject(s)
Models, Biological; Signal Processing, Computer-Assisted; Ultrasonography/methods; Animals; Cattle; Chickens; Image Processing, Computer-Assisted; Liver/diagnostic imaging; Muscles/diagnostic imaging; Phantoms, Imaging; Radio Waves; Support Vector Machine; Temperature
18.
Article in English | MEDLINE | ID: mdl-21156359

ABSTRACT

Two methods for simulation of ultrasound wavefront distortion are introduced and compared with aberration produced in simulations using digitized breast tissue specimens and a conventional multiple time-shift screen model. In the first method, aberrators are generated using a computational model of breast anatomy. In the second method, 10 to 12 irregularly shaped, strongly scattering inclusions are superimposed on the multiple-screen model to create a screen-inclusion model. Linear 2-D propagation of a 7.5-MHz planar, pulsed wavefront through each aberrator is computed using a first-order k-space method. The anatomical and screen-inclusion models reproduce two characteristics of arrival-time fluctuations observed in simulations using the digitized specimens that are not represented in simulations using the multiple-screen model: non-Gaussian first-order statistics and sharp changes in the rms arrival-time fluctuation as a function of propagation distance. The anatomical and screen-inclusion models both produce energy-level fluctuations similar to the digitized specimens, but the anatomical model more closely matches the pulse-shape distortion produced by the specimens. Both aberration models can readily be extended to 3-D, and the screen-inclusion model has the advantage of simplicity of implementation. Both models should enable more rigorous evaluation of adaptive focusing algorithms than is possible using conventional time-shift screen models.
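The idea of superimposing strongly scattering inclusions on a time-shift screen can be sketched as follows. The screen statistics, inclusion shapes, and every parameter below are invented for illustration; the paper's aberrators are derived from digitized tissue and a full anatomical model:

```python
import numpy as np

def smooth_screen(n, rms, corr_len, rng):
    # Gaussian-correlated random delay profile across the aperture:
    # a toy version of a conventional time-shift screen (seconds)
    white = rng.standard_normal(n + 6 * corr_len)
    kern = np.exp(-0.5 * (np.arange(-3 * corr_len, 3 * corr_len + 1) / corr_len) ** 2)
    s = np.convolve(white, kern, mode="same")[3 * corr_len:3 * corr_len + n]
    return s / s.std() * rms

def inclusion_delays(n, n_incl, amp, width, rng):
    # irregular, strongly delaying inclusions superimposed on the screen;
    # each adds a localized nonnegative delay bump of random size
    d = np.zeros(n)
    for _ in range(n_incl):
        c = rng.integers(width, n - width)
        w = int(rng.integers(width // 2, width))
        d[c - w:c + w] += rng.uniform(0.5, 1.0) * amp * np.hanning(2 * w)
    return d

rng = np.random.default_rng(0)
base = smooth_screen(128, 20e-9, 8, rng)       # smooth screen, 20 ns rms
incl = inclusion_delays(128, 10, 100e-9, 8, rng)
arrival = base + incl  # per-channel arrival-time fluctuation
```

Because the inclusion bumps are localized and one-sided, the combined arrival-time profile acquires the non-Gaussian first-order statistics and abrupt local changes that the abstract says a smooth screen alone cannot reproduce.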


Subject(s)
Breast/anatomy & histology; Image Processing, Computer-Assisted/methods; Models, Biological; Ultrasonography, Mammary/methods; Algorithms; Analysis of Variance; Computer Simulation; Female; Humans; Models, Anatomic; Statistics, Nonparametric
19.
J Acoust Soc Am ; 126(3): 1231-44, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19739736

ABSTRACT

A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to accurately model the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
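The spectral evaluation of spatial derivatives at the core of the k-space method can be shown in 1-D: differentiate by multiplying the spectrum by ik, which is exact (to machine precision) for bandlimited periodic fields. The 3-D method applies the same multiplication along each axis of a 3-D FFT; this 1-D sketch is only illustrative:

```python
import numpy as np

def spectral_derivative(f, dx):
    # spatial derivative via the Fourier transform: F{df/dx} = ik F{f}
    # (1-D for brevity; a 3-D implementation uses fftn and one k-grid
    # per axis, which is the all-to-all communication bottleneck noted
    # in the abstract when the grid is distributed across nodes)
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```

For a sampled sinusoid the result matches the analytic derivative to round-off, which is why spectral derivatives allow stable propagation over hundreds of wavelengths where finite differences accumulate dispersion error.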


Subject(s)
Acoustics; Models, Theoretical; Absorption; Algorithms; Computer Simulation; Fourier Analysis; Humans; Models, Biological; Pressure; Signal Processing, Computer-Assisted; Time Factors
20.
IEEE Trans Biomed Eng ; 56(12): 2806-15, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19695990

ABSTRACT

High-frequency (20-60 MHz) ultrasound images of preclinical tumor models are sensitive to changes in tissue microstructure that accompany tumor progression and treatment responses, but the relationships between tumor microanatomy and high-frequency ultrasound backscattering are incompletely understood. This paper introduces a 3-D microanatomical model in which tissue is treated as a population of stochastically positioned spherical cells consisting of a spherical nucleus surrounded by homogeneous cytoplasm. The model is used to represent the microstructure of both healthy mouse liver and an experimental liver metastasis that are analyzed using 4′,6-diamidino-2-phenylindole- and hematoxylin and eosin-stained histology specimens digitized at 20× magnification. The spatial organization of cells is controlled in the model by a Gibbs-Markov point process whose parameters are tuned to maximize the similarity of experimental and simulated tissue microstructure, which is characterized using three descriptors of nuclear spatial arrangement adopted from materials science. The model can accurately reproduce the microstructure of the relatively homogeneous healthy liver and the average cell clustering observed in the experimental metastasis, but is less effective at reproducing the spatial heterogeneity of the experimental metastasis. The model provides a framework for computational investigations of the effects of individual microstructural and acoustic properties on high-frequency backscattering.
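A Gibbs-Markov point process is typically sampled with Markov-chain methods; the sketch below instead uses a much simpler sequential-inhibition (hard-core) sampler as a stand-in, paired with a nearest-neighbor distance descriptor of the kind used to characterize nuclear arrangement. The radii, counts, and 2-D unit-square domain are arbitrary choices, not the paper's model:

```python
import numpy as np

def hardcore_points(n, r, rng, box=1.0, max_tries=20000):
    # sequential inhibition: accept a uniform candidate only if it lies
    # at least r from every accepted point (pairwise repulsion, the
    # simplest cousin of a Gibbs hard-core interaction)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        p = rng.uniform(0.0, box, 2)
        if all(np.hypot(*(p - q)) >= r for q in pts):
            pts.append(p)
        tries += 1
    return np.array(pts)

def nearest_neighbor_dist(pts):
    # per-point nearest-neighbor distance: a first-order descriptor
    # of spatial arrangement (clustered vs. regular vs. random)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)
```

Tuning an interaction model so that descriptors like this one match their values measured from histology is the fitting loop the abstract describes; a clustered metastasis would need an attractive rather than purely repulsive interaction, which is where the full Gibbs-Markov machinery comes in.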


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/secondary; Liver/diagnostic imaging; Ultrasonography/methods; Animals; Cell Line, Tumor; Computer Simulation; Image Enhancement/methods; Mice; Mice, Inbred C57BL; Models, Biological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity; Stochastic Processes