ABSTRACT
Underwater mobile acoustic source localization faces several challenges, including the unknown propagation speed of the source signal, uncertainty in the observation platform's position and velocity (i.e., platform systematic errors), and economic cost. This paper proposes a new two-step closed-form localization algorithm that jointly processes angle-of-arrival (AOA), time-difference-of-arrival (TDOA), and frequency-difference-of-arrival (FDOA) measurements to address these challenges. The algorithm first introduces auxiliary variables to construct pseudo-linear equations and obtain an initial solution. It then exploits the relationship between the unknown and auxiliary variables to derive the exact solution comprising only the unknown variables. Both theoretical analysis and simulation experiments demonstrate that the proposed method accurately estimates the source position, the source velocity, and the sound speed, even when the propagation speed is unknown and platform systematic errors are present. Within a reasonable error range, the estimator is asymptotically optimal and approaches the Cramér-Rao lower bound (CRLB). Furthermore, the algorithm has low complexity, reduces the number of required localization platforms, and thus lowers the economic cost. Simulation experiments also validate the effectiveness of the proposed localization method across various scenarios, where it outperforms the comparison algorithms.
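The two-step structure described above follows the familiar pattern of pseudo-linear weighted least squares; the following is a generic sketch of that pattern, not the paper's exact equations (which involve the AOA/TDOA/FDOA terms, the unknown sound speed, and the platform error statistics):

\[ \text{Step 1:}\quad \hat{\varphi} = \left(A_1^{\mathsf T} W_1 A_1\right)^{-1} A_1^{\mathsf T} W_1 b_1, \]

where \(\varphi\) stacks the source position, source velocity, sound speed, and the auxiliary variables introduced to linearize the measurement equations, and

\[ \text{Step 2:}\quad \hat{\psi} = \left(A_2^{\mathsf T} W_2 A_2\right)^{-1} A_2^{\mathsf T} W_2 b_2, \]

where the second system encodes the deterministic relations between the auxiliary variables and the unknowns, so that \(\hat{\psi}\) contains only the unknown variables.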
ABSTRACT
Angle-of-arrival (AOA) measurements are often used in underwater acoustic localization. Unlike the traditional AOA model based on azimuth and elevation measurements, the AOA model studied in this paper uses bearing measurements, which are also commonly employed in ultra-short baseline (USBL) systems. However, traditional acoustic localization requires additional range information; when range information is unavailable, a closed-form solution is difficult to obtain from bearing measurements alone. This article therefore explores a closed-form localization solution that uses only bearing measurements. A pseudo-linear measurement model relating the source position to the bearing measurements is derived, and, accounting for the nonlinear relationships among the parameters, a weighted least-squares optimization problem with multiple constraints is established. Unlike the traditional two-step least-squares method, a semidefinite programming (SDP) method is designed to obtain the initial solution, and a bias compensation method is then proposed to further reduce the localization error based on the SDP result. Numerical simulations show that the proposed method can achieve Cramér-Rao lower bound (CRLB) accuracy. A field test also shows that the proposed method can locate the source without range measurements and obtains the highest positioning accuracy.
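For context, a minimal 2D version of the pseudo-linear bearing model (the paper's USBL bearing model and its constraints are more involved) relates a bearing measurement θ_i taken at sensor position s_i to the unknown source position u by

\[ \theta_i = \operatorname{atan2}\!\left(u_y - s_{i,y},\; u_x - s_{i,x}\right), \]

which, in the noise-free case, rearranges into the pseudo-linear equation

\[ \begin{bmatrix} \sin\theta_i & -\cos\theta_i \end{bmatrix} u \;=\; \begin{bmatrix} \sin\theta_i & -\cos\theta_i \end{bmatrix} s_i . \]

Stacking such equations over all sensors yields the linear system on which the weighted least-squares and SDP steps operate.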
ABSTRACT
Diffusion MRI (dMRI) allows for non-invasive investigation of brain tissue microstructure. By fitting a model to the dMRI signal, various quantitative measures can be derived from the data, such as fractional anisotropy, neurite density and axonal radii maps. We investigate the Fisher Information Matrix (FIM) and uncertainty propagation as a generally applicable method for quantifying the parameter uncertainties in linear and non-linear diffusion MRI models. In direct comparison with Markov Chain Monte Carlo (MCMC) sampling, the FIM produces similar uncertainty estimates at much lower computational cost. Using acquired and simulated data, we then list several characteristics that influence the parameter variances, including data complexity and signal-to-noise ratio. For practical purposes we investigate a possible use of uncertainty estimates in decreasing intra-group variance in group statistics by uncertainty-weighted group estimates. This has potential use cases for detection and suppression of imaging artifacts.
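As a concrete illustration of the FIM approach, the sketch below computes CRLB-style parameter standard deviations for a simple mono-exponential diffusion signal model under i.i.d. Gaussian noise; the model, b-values, and noise level are illustrative stand-ins, not the dMRI models analyzed in the paper.

```python
# Hedged sketch: Fisher-information-based parameter uncertainty for a
# mono-exponential model S(b) = S0 * exp(-b * D) with Gaussian noise of
# known standard deviation sigma. Values are illustrative only.
import numpy as np

def signal(theta, b):
    S0, D = theta
    return S0 * np.exp(-b * D)

def numerical_jacobian(theta, b, eps=1e-6):
    J = np.zeros((b.size, theta.size))
    for k in range(theta.size):
        dt = np.zeros_like(theta)
        dt[k] = eps
        J[:, k] = (signal(theta + dt, b) - signal(theta - dt, b)) / (2 * eps)
    return J

b = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])   # s/mm^2, illustrative
theta = np.array([1.0, 1.0e-3])                       # S0 (a.u.), D (mm^2/s)
sigma = 0.02                                          # noise std (a.u.)

J = numerical_jacobian(theta, b)
FIM = J.T @ J / sigma**2              # Fisher information for Gaussian noise
crlb_cov = np.linalg.inv(FIM)         # lower bound on the parameter covariance
print("std(S0) >=", np.sqrt(crlb_cov[0, 0]))
print("std(D)  >=", np.sqrt(crlb_cov[1, 1]))
```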
Subject(s)
Diffusion Magnetic Resonance Imaging, Neurites, Humans, Uncertainty, Diffusion Magnetic Resonance Imaging/methods, Markov Chains, Axons
ABSTRACT
While B0 shimming is an important requirement for in vivo brain spectroscopy, for single voxel spectroscopy (SVS), the role for advanced shim methods has been questioned. Specifically, with the small spatial dimensions of the voxel, the extent to which inhomogeneities higher than second order exist and the ability of higher-order shims to correct them is controversial. To assess this, we acquired SVS from two loci of neurophysiological interest, the rostral prefrontal cortex (rPFC; 8 cc) and hippocampus (Hc; 9 cc). The rPFC voxel was placed using SUsceptibility Managed Optimization (SUMO) and an initial B0 map that covers the entire cerebrum to cerebellum. In each location, we compared map-based shimming (Bolero) with projection-based shimming (FAST(EST)MAP). We also compared vendor-provided spherical harmonic first- and second-order shims with additional third- and fourth-order shim hardware. The 7T SVS acquisition used stimulated echo acquisition mode (STEAM) TR/TM/TE of 6 s/20 ms/8 ms, a tissue water acquisition for concentration reference, and LCModel for spectral analysis. In the rPFC (n = 7 subjects), Bolero shimming with first- and second-order shims reduced the residual inhomogeneity σ_B0 from 9.8 ± 4.5 Hz with FAST(EST)MAP to 6.5 ± 2.0 Hz. The addition of third- and fourth-order shims further reduced σ_B0 to 4.0 ± 0.8 Hz. In the Hc (n = 7 subjects), FAST(EST)MAP, Bolero with first- and second-order shims, and Bolero with first- to fourth-order shims achieved σ_B0 values of 8.6 ± 1.9, 5.6 ± 1.0, and 4.6 ± 0.9 Hz, respectively. The spectral linewidth, Δν_σB0, was estimated with a Voigt lineshape using σ_B0 and T2 = 130 ms. Δν_σB0 significantly correlated with the Cramér-Rao lower bounds and concentrations of several metabolites, including glutamate and glutamine in the rPFC. In both loci, if the B0 distribution is well described by a Gaussian model, the variance of the metabolite concentrations is reduced, consistent with the LCModel fit based on a unimodal lineshape. Overall, the use of the high-order and map-based B0 shim methods improved the accuracy and consistency of spectroscopic data.
Subject(s)
Brain, Head, Humans, Brain/diagnostic imaging, Magnetic Resonance Spectroscopy/methods, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods
ABSTRACT
The accuracy of radio-based positioning is heavily degraded by a dense multipath (DM) channel. The DM affects both time-of-flight (ToF) measurements extracted from wideband (WB) signals (specifically, if the bandwidth is below 100 MHz) and received signal strength (RSS) measurements, due to the interference of multipath signal components with the information-bearing line-of-sight (LoS) component. This work proposes an approach for combining these two different measurement technologies, leading to robust position estimation in the presence of DM. We assume that a large ensemble of densely spaced devices is to be positioned. We use RSS measurements to determine "clusters" of devices in the vicinity of each other. Joint processing of the WB measurements from all devices in a cluster efficiently suppresses the influence of the DM. We formulate an algorithmic approach for the information fusion of the two technologies and derive the corresponding Cramér-Rao lower bound (CRLB) to gain insight into the performance trade-offs at hand. We evaluate our results by simulations and validate the approach with real-world measurement data. The results show that the clustering approach can halve the root-mean-square error (RMSE) from about 2 m to below 1 m, using WB signal transmissions in the 2.4 GHz ISM band at a bandwidth of about 80 MHz.
Subject(s)
Technology, Upper Extremity, Cluster Analysis
ABSTRACT
Hierarchical Temporal Memory (HTM) is an unsupervised algorithm in machine learning. It models several fundamental neocortical computational principles. Spatial Pooler (SP) is one of the main components of the HTM, which continuously encodes streams of binary input from various layers and regions into sparse distributed representations. In this paper, the goal is to evaluate the sparsification in the SP algorithm from the perspective of information theory by the information bottleneck (IB), Cramer-Rao lower bound, and Fisher information matrix. This paper makes two main contributions. First, we introduce a new upper bound for the standard information bottleneck relation, which we refer to as modified-IB in this paper. This measure is used to evaluate the performance of the SP algorithm in different sparsity levels and various amounts of noise. The MNIST, Fashion-MNIST and NYC-Taxi datasets were fed to the SP algorithm separately. The SP algorithm with learning was found to be resistant to noise. Adding up to 40% noise to the input resulted in no discernible change in the output. Using the probabilistic mapping method and Hidden Markov Model, the sparse SP output representation was reconstructed in the input space. In the modified-IB relation, it is numerically calculated that a lower noise level and a higher sparsity level in the SP algorithm lead to a more effective reconstruction and SP with 2% sparsity produces the best results. Our second contribution is to prove mathematically that more sparsity leads to better performance of the SP algorithm. The data distribution was considered the Cauchy distribution, and the Cramer-Rao lower bound was analyzed to estimate SP's output at different sparsity levels.
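For reference, the standard information bottleneck relation referred to above trades compression of the input X against preservation of information about the relevance variable Y by minimizing, over the encoding p(t|x), the Lagrangian

\[ \mathcal{L}\!\left[p(t\mid x)\right] = I(X;T) - \beta\, I(T;Y); \]

the modified-IB measure introduced in the paper is an upper bound on this standard relation rather than a replacement for it.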
ABSTRACT
Positioning systems are used in a wide range of applications which require determining the position of an object in space, such as locating and tracking assets, people and goods; assisting navigation systems; and mapping. Indoor Positioning Systems (IPSs) are used where satellite and other outdoor positioning technologies lack precision or fail. Ultra-WideBand (UWB) technology is especially suitable for an IPS, as it operates under high data transfer rates over short distances and at low power densities, although signals tend to be disrupted by various objects. This paper presents a comprehensive study of the precision, failure, and accuracy of 2D IPSs based on UWB technology and a pseudo-range multilateration algorithm using Time Difference of Arrival (TDoA) signals. As a case study, the positioning of a 4 × 4 m² area, four anchors (transceivers), and one tag (receiver) are considered using Bitcraze's Loco Positioning System. A Cramér-Rao Lower Bound analysis identifies the convex hull of the anchors as the region with highest precision, taking into account the anisotropic radiation pattern of the anchors' antennas as opposed to ideal signal distributions, while bifurcation envelopes containing the anchors are defined to bound the regions in which the IPS is predicted to fail. This allows the formulation of a so-called flyable area, defined as the intersection between the convex hull and the region outside the bifurcation envelopes. Finally, the static bias is measured after applying a built-in Extended Kalman Filter (EKF) and mapped using a Radial Basis Function Network (RBFN). A debiasing filter is then developed to improve the accuracy. Findings and developments are experimentally validated, with the IPS observed to fail near the anchors, precision around ±3 cm, and accuracy improved by about 15 cm for static and 5 cm for dynamic measurements, on average.
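To illustrate the pseudo-range multilateration step, the sketch below runs a plain Gauss-Newton solver on TDoA range-difference residuals for a synthetic 4 × 4 m anchor layout; it is a minimal stand-in under idealized (noise-free) assumptions, not the Loco Positioning System firmware or the EKF/RBFN pipeline described above.

```python
# Hedged sketch: 2D TDoA multilateration by Gauss-Newton on range-difference
# residuals, with synthetic anchors and a noise-free measurement for clarity.
import numpy as np

anchors = np.array([[0., 0.], [4., 0.], [4., 4.], [0., 4.]])  # illustrative 4 x 4 m square
true_pos = np.array([1.2, 2.5])

d = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = d[1:] - d[0]                    # range differences w.r.t. anchor 0

x = np.array([2.0, 2.0])               # initial guess: center of the area
for _ in range(20):
    r = np.linalg.norm(anchors - x, axis=1)
    res = (r[1:] - r[0]) - tdoa        # residuals of the range-difference model
    J = (x - anchors[1:]) / r[1:, None] - (x - anchors[0]) / r[0]  # Jacobian rows
    step, *_ = np.linalg.lstsq(J, -res, rcond=None)
    x = x + step
    if np.linalg.norm(step) < 1e-9:
        break
print("estimated position:", x)
```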
ABSTRACT
Traditional direction-finding systems are based on processing the outputs of multiple spatially separated antennas. The impinging signal Angle-of-Arrival (AOA) is estimated using the relative phase and amplitude of the multiple outputs that are sampled simultaneously. Here, we explore the potential of a single moving antenna to provide useful direction finding of a single transmitter. If the transmitted signal frequency is steady enough during the collection of data, a single antenna can be moved while tracking the phase changes to provide an Angle-of-Arrival measurement. The advantages of a single-antenna sensor include the sensor size, the lack of a need for multiple-receiver synchronization in time and frequency, the lack of mutual antenna coupling, and the cost of the system. However, a single-antenna sensor requires an accurate knowledge of its position during the data collection and it is challenged by transmitter phase instability, signal modulation, and transmitter movement during the measurement integration time. We analyze the performance of the proposed sensor, support the analysis with simulations and finally, present measurements performed by hardware configured to check the validity of the proposed single-antenna sensor.
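The underlying idea can be summarized with the standard far-field phase model for a moving antenna (a sketch; the paper's processing and error analysis are more detailed): with carrier frequency f_0, wavelength λ, known antenna position p(t), and a unit vector k pointing from the antenna toward the transmitter, the received carrier phase evolves approximately as

\[ \phi(t) \approx \phi_0 + 2\pi f_0 t + \frac{2\pi}{\lambda}\, k^{\mathsf T} p(t) + n(t), \]

so that, if f_0 is sufficiently stable and p(t) is accurately known, fitting the measured phase history over the antenna trajectory yields the direction k, i.e., the Angle-of-Arrival.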
ABSTRACT
The localization of sensors in wireless sensor networks has recently gained considerable attention. The existing location methods are based on a one-spot measurement model. It is difficult to further improve the positioning accuracy of existing location methods based on single-spot measurements. This paper proposes two location methods based on multi-spot measurements to reduce location errors. Because the multi-spot measurements model has more measurement equations than the single-spot measurements model, the proposed methods provide better performance than the traditional location methods using one-spot measurement in terms of the root mean square error (RMSE) and Cramer-Rao lower bound (CRLB). Both closed-form and iterative algorithms are proposed in this paper. The former performs suboptimally with less computational burden, whereas the latter has the highest positioning accuracy in attaining the CRLB. Moreover, a novel CRLB for the proposed multi-spot measurements model is also derived in this paper. A theoretical proof shows that the traditional CRLB in the case of single-spot measurements performs worse than the proposed CRLB in the case of multi-spot measurements. The simulation results show that the proposed methods have a lower RMSE than the traditional location methods.
Subject(s)
Algorithms, Computer Simulation
ABSTRACT
Indoor signals are susceptible to NLOS propagation effects, multipath effects, and a dynamic environment, posing more challenges than outdoor signals despite decades of advancements in location services. In modern Wi-Fi networks that support both MIMO and OFDM techniques, Channel State Information (CSI) is now used as an enhanced wireless channel metric replacing the Wi-Fi received signal strength (RSS) fingerprinting method. The indoor multipath effects, however, make it less robust and stable. This study proposes a positive knowledge transfer-based heterogeneous data fusion method for representing the different scenarios of temporal variations in CSI-based fingerprint measurements generated in a complex indoor environment targeting indoor parking lots, while reducing the training calibration overhead. Extensive experiments were performed with real-world scenarios of the indoor parking phenomenon. The results revealed that the proposed algorithm is efficient and delivers consistent positioning accuracy across all potential variations. In addition to improving indoor parking location accuracy, the proposed algorithm provides computationally robust and efficient location estimates in dynamic environments. A Cramér-Rao lower bound (CRLB) analysis was also used to estimate the lower bound of the parking-lot location error variance under various temporal variation scenarios. Based on analytical derivations, we prove that the lower bound of the variance of the location estimator depends on (i) the angle of the base stations, (ii) the number of base stations, (iii) the distance between the target and the base station, d_jr, (iv) the correlation of the measurements, ρ_rjai, and (v) the signal propagation parameters σ_C and γ.
Subject(s)
Algorithms, Calibration
ABSTRACT
This paper presents the application of heterogeneous transfer learning (HetTL) methods that use hybrid feature selection to reduce the training calibration effort and the noise generated by fingerprint duplicates obtained from multiple Wi-Fi access points. A Cramér-Rao lower bound (CRLB) analysis was also applied to estimate a lower limit on the variance of the parameter estimator used to analyze positioning performance. We developed two novel algorithms for feature selection in fingerprint-based indoor positioning problems (IPP) to enhance positioning performance in the target domain with HetTL. The algorithms cover two scenarios: (i) a principal component analysis-based approach (PCA-based) and (ii) a hybrid approach that takes both PCA and correlation effect analysis into account (hybrid scenario). Accordingly, a new feature vector was constructed by retaining only the most significant predictors, and the most efficient feature dimensions were also determined using the hybrid approach. Experimental results showed that the proposed hybrid-based algorithm has the minimum mean absolute error. The CRLB analysis also showed that the number of Wi-Fi access points can affect the lower bound of the location estimation error; however, identifying the most significant predictors is an effective approach to improve positioning performance.
ABSTRACT
In this paper, we derive the Cramér-Rao lower bounds (CRLBs) for direction of arrival (DoA) estimation using sparse Bayesian learning (SBL) and the Laplace prior. The CRLB is a lower bound on the variance of an estimator, and changes in the CRLB indicate how a specific factor affects the DoA estimator; in this paper, a Laplace prior and a three-stage framework are used for DoA estimation. We derive the CRLBs under different scenarios: (i) if the unknown parameters consist of deterministic and random variables, a hybrid CRLB is derived; (ii) if all the unknown parameters are random, a Bayesian CRLB is derived, and the marginalized Bayesian CRLB is obtained by marginalizing out the nuisance parameter. We also derive the CRLBs of the hyperparameters involved in the three-stage model and explore the effect of multiple snapshots on the CRLBs. Comparing the derived CRLBs for SBL, we find that the marginalized Bayesian CRLB is tighter than the other CRLBs when the SNR is low and that the differences between the CRLBs become smaller when the SNR is high. We also study the relationship between the mean squared error of the source magnitudes and the CRLBs, including numerical simulation results with a variety of antenna configurations, such as different numbers of receivers and different noise conditions.
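For orientation, the textbook form of the hybrid bound used when the parameter vector θ = [θ_d^T, θ_r^T]^T mixes deterministic (θ_d) and random (θ_r) components is

\[ J_{\mathrm{H}} = \mathbb{E}_{x,\theta_r}\!\left[-\nabla_{\theta}\nabla_{\theta}^{\mathsf T} \ln p(x\mid\theta)\right] + \mathbb{E}_{\theta_r}\!\left[-\nabla_{\theta}\nabla_{\theta}^{\mathsf T} \ln p(\theta_r)\right], \qquad \mathrm{HCRLB} = J_{\mathrm{H}}^{-1}; \]

this is the standard general form, while the paper specializes such bounds to the SBL three-stage model with a Laplace prior and marginalizes out nuisance hyperparameters for the Bayesian variant.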
ABSTRACT
Sensor placement is an important factor that may significantly affect the localization performance of a sensor network. This paper investigates the sensor placement optimization problem in three-dimensional (3D) space for angle of arrival (AOA) target localization with Gaussian priors. We first show that, under the A-optimality criterion, the optimization problem can be transformed into diagonalizing the AOA-based Fisher information matrix (FIM). Secondly, we prove that the FIM satisfies an invariance property under 3D rotation and that the Gaussian covariance matrix of the FIM can be diagonalized via 3D rotation. Based on this finding, an optimal sensor placement method using 3D rotation is developed for the case in which prior information about the target location is available. Finally, several simulations were carried out to demonstrate the effectiveness of the proposed method. Compared with the existing methods, the mean squared error (MSE) of the maximum a posteriori (MAP) estimation using the proposed method is lower by at least 25% when the number of sensors is between 3 and 6, while the estimation bias remains very close to zero (smaller than 0.15 m).
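In this setting, the A-optimality criterion amounts to minimizing the trace of the inverse (Bayesian) Fisher information matrix over the sensor geometry; a common form with a Gaussian prior covariance P_0 (a sketch, not necessarily the paper's exact notation) is

\[ \min_{\text{sensor positions}} \; \operatorname{tr}\!\left[\left(J_{\mathrm{AOA}} + P_0^{-1}\right)^{-1}\right], \]

i.e., minimizing the sum of the posterior MSE bounds on the three coordinates, which is the quantity that the diagonalization-by-rotation argument operates on.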
ABSTRACT
Target localization plays a vital role in ocean sensor networks (OSNs), in which accurate position information is not only a critical need of ocean observation but also a necessary condition for the implementation of ocean engineering. Compared with other range-based localization technologies in OSNs, the received signal strength (RSS)-based localization technique has attracted widespread attention due to its low cost and synchronization-free nature. However, maintaining relatively good accuracy in an environment as dynamic and complex as the ocean remains challenging. One of the most damaging factors degrading the localization accuracy is the uncertainty in transmission power. Besides equipment loss, uncertain factors in the changeable ocean environment may result in a significant deviation between the standard rated transmission power and the usable transmission power. This difference between the rated and actual transmission power introduces an extra error into localization in OSNs. To address this, a method is proposed that can locate the target without prior knowledge of the transmission power. The method relies on a two-phase procedure in which the location information and the transmission power are jointly estimated. First, the original nonconvex localization problem is transformed into an alternating non-negativity-constrained least-squares framework with unknown transmission power (UT-ANLS). Under this framework, a two-stage optimization method based on the interior point method (IPM) and a majorization-minimization tactic (MMT) is proposed to search for the optimal solution. In the first stage, the barrier function method is used to limit the optimization scope and find an approximate solution to the problem; however, it cannot approach the constraint boundary due to its intrinsic error. Then, in the second stage, the original objective is converted into a surrogate function consisting of a convex quadratic term and a concave term. The solution obtained by the IPM is used as the initial guess for the MMT to jointly estimate both the location and the transmission power in the iterations. In addition, to evaluate the performance of IPM-MM, the Cramér-Rao lower bound (CRLB) is derived. Numerical simulation results demonstrate that IPM-MM achieves better performance than the other methods in different scenarios.
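RSS measurements in such formulations typically follow the log-distance path-loss model; a minimal sketch of its standard form (not necessarily the paper's exact parameterization) is

\[ P_i = P_0 - 10\,\gamma \log_{10}\!\frac{\lVert x - s_i \rVert}{d_0} + n_i, \qquad n_i \sim \mathcal{N}(0,\sigma^2), \]

where P_0 is the (here unknown) reference power at distance d_0, γ is the path-loss exponent, x is the target position, and s_i is the i-th sensor; the UT-ANLS framework jointly estimates x and P_0 from these measurements.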
ABSTRACT
The future of transportation systems lies in autonomous and assisted driving, with the aim of reaching full automation. There is a strong focus on communication technologies expected to offer vehicular application services, most of which are location-based. This paper studies localization accuracy limits using vehicle-to-infrastructure communication channels provided by IEEE 802.11p and LTE-V, considering two different vehicular network designs. Real data measurements obtained on our highway testbed are used to model and simulate propagation channels, the positions of base stations, and the route followed by the vehicle. The Cramér-Rao lower bound, geometric dilution of precision, and least-squares error for the time difference of arrival localization technique are investigated. Based on our analyses and findings, LTE-V outperforms IEEE 802.11p. However, it is apparent that providing a larger signal bandwidth dedicated to localization, positioning network sites on both sides of the highway, and accounting for the geometry between the vehicle and the network sites all improve vehicle localization accuracy.
ABSTRACT
The imaging performance of clinical positron emission tomography (PET) systems has evolved impressively during the last ~15 years. A main driver of these improvements has been the introduction of time-of-flight (TOF) detectors with high spatial resolution and detection efficiency, initially based on photomultiplier tubes, later silicon photomultipliers. This review aims to offer insight into the challenges encountered, solutions developed, and lessons learned during this period. Detectors based on fast, bright, inorganic scintillators form the scope of this work, as these are used in essentially all clinical TOF-PET systems today. The improvement of the coincidence resolving time (CRT) requires the optimization of the entire detection chain and a sound understanding of the physics involved facilitates this effort greatly. Therefore, the theory of scintillation detector timing is reviewed first. Once the fundamentals have been set forth, the principal detector components are discussed: the scintillator and the photosensor. The parameters that influence the CRT are examined and the history, state-of-the-art, and ongoing developments are reviewed. Finally, the interplay between these components and the optimization of the overall detector design are considered. Based on the knowledge gained to date, it appears feasible to improve the CRT from the values of 200-400 ps achieved by current state-of-the-art TOF-PET systems to about 100 ps or less, even though this may require the implementation of advanced methods such as time resolution recovery. At the same time, it appears unlikely that a system-level CRT in the order of ~10 ps can be reached with conventional scintillation detectors. Such a CRT could eliminate the need for conventional tomographic image reconstruction and a search for new approaches to timestamp annihilation photons with ultra-high precision is therefore warranted. While the focus of this review is on timing performance, it attempts to approach the topic from a clinically driven perspective, i.e. bearing in mind that the ultimate goal is to optimize the value of PET in research and (personalized) medicine.
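A commonly quoted approximate scaling that underlies much of this discussion (a rule of thumb under idealized assumptions such as negligible photosensor jitter, not an exact bound taken from the review) is

\[ \mathrm{CRT} \;\propto\; \sqrt{\frac{\tau_r\,\tau_d}{N_{\mathrm{pe}}}}, \]

where τ_r and τ_d are the scintillator rise and decay times and N_pe is the number of detected photoelectrons; it suggests why brighter, faster scintillators and more efficient photosensors are the main levers for improving the CRT.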
Subject(s)
Positron-Emission Tomography, Photons, Physics, Scintillation Counting, Technology
ABSTRACT
For the first time, we propose using amorphous selenium (a-Se) as the photoconductive material for time-of-flight (TOF) detectors. Advantages of avalanche-mode a-Se include a high fill factor, low excess noise due to unipolar photoconductive gain, band transport in extended states with the highest possible mobility, and negligible trapping. The major drawback of a-Se is its poor single-photon time resolution and low carrier mobility due to shallow traps, problems that must be circumvented for TOF applications. We propose a nanopattern multi-well a-Se detector (MWSD) to enable both impact ionization avalanche gain and unipolar time-differential (UTD) charge sensing in one device. Our experimental results show that UTD charge sensing in avalanche-mode a-Se improves time resolution by nearly 4 orders of magnitude. In addition, we used Cramér-Rao lower bound analysis and Monte Carlo simulations to demonstrate the viability of our MWSD for low-statistics photon imaging modalities such as PET despite it being a linear-mode device. Based on our results, our device may achieve 100 ps coincidence time resolution in TOF PET with a material that is low cost and uniformly scalable to large areas.
Subject(s)
Selenium, Monte Carlo Method, Positron-Emission Tomography
ABSTRACT
Fluorescence-lifetime single molecule localization microscopy (FL-SMLM) adds the lifetime dimension to the spatial super-resolution provided by SMLM. Independent of intensity and spectrum, this lifetime information can be used, for example, to quantify the energy transfer efficiency in Förster Resonance Energy Transfer (FRET) imaging, to probe the local environment with dyes that change their lifetime in an environment-sensitive manner, or to achieve image multiplexing by using dyes with different lifetimes. We present a thorough theoretical analysis of fluorescence-lifetime determination in the context of FL-SMLM and compare different lifetime-fitting approaches. In particular, we investigate the impact of background and noise, and give clear guidelines for procedures that are optimized for FL-SMLM. We also present and discuss our public-domain software package "Fluorescence-Lifetime TrackNTrace," which converts recorded fluorescence microscopy movies into super-resolved FL-SMLM images.
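As a minimal illustration of lifetime fitting (an idealized special case, not the full comparison of fitting approaches in the paper): for a background-free mono-exponential decay observed over a window much longer than the lifetime, the maximum-likelihood estimate is simply the mean photon arrival time, with a CRLB-limited standard deviation of τ/√N.

```python
# Hedged sketch: maximum-likelihood lifetime estimate for a background-free,
# mono-exponential decay with an effectively infinite detection window.
# The paper's analysis covers more realistic fitting with background and noise.
import numpy as np

rng = np.random.default_rng(0)
true_tau = 2.5                                         # ns, illustrative
arrival_times = rng.exponential(true_tau, size=500)    # photon delays after the pulse

tau_hat = arrival_times.mean()                         # MLE in this idealized model
tau_err = tau_hat / np.sqrt(arrival_times.size)        # CRLB-based std: tau / sqrt(N)
print(f"tau = {tau_hat:.2f} +/- {tau_err:.2f} ns")
```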
ABSTRACT
We consider measures of nonlinearity (MoNs) of a polynomial curve in two dimensions (2D), as previously studied in our Fusion 2010 and 2019 ICCAIS papers. Our previous work calculated curvature measures of nonlinearity (MoNs) using (i) extrinsic curvature, (ii) Bates and Watts parameter-effects curvature, and (iii) direct parameter-effects curvature. In this paper, we introduce the computation and analysis of a number of new MoNs, including Beale's MoN, Linssen's MoN, Li's MoN, and the MoN of Straka, Duník, and Šimandl. Our results show that all of the MoNs studied follow the same type of variation as a function of the independent variable and the power of the polynomial. Secondly, theoretical analysis and numerical results show that the logarithm of the mean square error (MSE) is an affine function of the logarithm of the MoN for each type of MoN. This implies that, when the MoN increases, the MSE increases. We present an up-to-date review of various MoNs in the context of non-linear parameter estimation and non-linear filtering. The MoNs studied here can be used to compute the MoN in non-linear filtering problems.
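The affine relationship reported above can be written compactly as

\[ \log \mathrm{MSE} \;\approx\; a + b\, \log \mathrm{MoN} \quad\Longleftrightarrow\quad \mathrm{MSE} \;\approx\; c\,\mathrm{MoN}^{\,b}, \]

with constants a and b (and c = e^a) depending on the scenario, so a larger measure of nonlinearity implies a larger mean square error.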
ABSTRACT
Proton magnetic resonance spectroscopy (1H-MRS) of the fetal brain can be used to study emerging metabolite profiles in the developing brain. Identifying early deviations in brain metabolic profiles in high-risk fetuses may offer important adjunct clinical information to improve surveillance and management during pregnancy. OBJECTIVE: To investigate the normative trajectory of fetal brain metabolites during the second half of gestation, and to determine the impact of using different Cramér-Rao lower bound (CRLB) thresholds on metabolite measurements obtained with magnetic resonance spectroscopy. STUDY DESIGN: We prospectively enrolled 219 pregnant women with normal fetal ultrasound and biometric measures. We performed a total of 331 fetal 1H-MRS studies at gestational ages in the range of 18-39 weeks, with 112 of the enrolled participants scanned twice. All the spectra in this study were acquired on a GE 1.5 T scanner using a long echo time of 144 ms and analyzed in LCModel. RESULTS: We successfully acquired and analyzed fetal 1H-MRS with a success rate of 93%. We observed increases in total NAA, total creatine, total choline, scyllo-inositol, and the total NAA-to-total choline ratio with advancing GA. Our results also showed faster increases in total NAA and the total NAA-to-total choline ratio during the third trimester compared to the second trimester. We also observed faster increases in total choline and total NAA in female fetuses. Progressively tightening the Cramér-Rao lower bound threshold from 100% to 40%-20% increased the mean metabolite concentrations and decreased the number of observations available for analysis. CONCLUSION: We report serial fetal brain biochemical profiles in a large cohort of healthy fetuses studied twice in gestation with a high success rate in the second and third trimesters of pregnancy. We present normative in vivo fetal brain metabolite trajectories over a 21-week gestational period, which can be used to non-invasively measure and monitor brain biochemistry in the healthy and high-risk fetus.