The negatively charged nitrogen-vacancy (NV(-)) center in diamond is the focus of widespread attention for applications ranging from quantum information processing to nanoscale metrology. Although most work so far has focused on the NV(-) optical and spin properties, control of the charge state promises complementary opportunities. One intriguing possibility is the long-term storage of information, a notion we introduce here using NV-rich, type Ib diamond. As a proof of principle, we use multicolor optical microscopy to read, write, and reset arbitrary data sets with a two-dimensional (2D) binary bit density comparable to present digital-video-disk (DVD) technology. Leveraging the singular dynamics of NV(-) ionization, we encode information on different planes of the diamond crystal with no cross-talk, hence extending the storage capacity to three dimensions. Furthermore, we correlate the center’s charge state with the nuclear spin polarization of the nitrogen host and show that the latter is robust to a cycle of NV(-) ionization and recharge. In combination with super-resolution microscopy techniques, these observations provide a route toward subdiffraction NV charge control, a regime where the storage capacity could exceed that of present technologies.
BACKGROUND: The shape of the dental root canal is highly patient-specific. Automated identification of the medial line of dental root canals and reproduction of their 3D shape can be beneficial for planning endodontic interventions, as severely curved root canals or multi-rooted teeth may pose treatment challenges. Accurate shape information of the root canals may also be used by manufacturers of endodontic instruments in order to make more efficient clinical tools. METHOD: Novel image processing procedures dedicated to the automated detection of the medial axis of the root canal from dental micro-CT and cone-beam CT records are developed. For micro-CT, the 3D model of the root canal is built up from several hundred parallel cross sections, using image enhancement, histogram-based fuzzy c-means clustering, center point detection in the segmented slice, three-dimensional inner surface reconstruction, and potential-field-driven curve skeleton extraction in three dimensions. Cone-beam CT records are processed with image enhancement filters and fuzzy-chain-based regional segmentation, followed by reconstruction of the root canal surface and detection of its skeleton via a mesh contraction algorithm. RESULTS: The proposed medial line identification and root canal detection algorithms are validated on clinical data sets. 25 micro-CT and 36 cone-beam CT records are used in the validation procedure. The overall success rate of automatic dental root canal identification was about 92% in both procedures. The algorithms proved to be accurate enough for endodontic therapy planning. CONCLUSIONS: Accurate medial line identification and shape detection algorithms for the dental root canal have been developed. Different procedures are defined for micro-CT and cone-beam CT records. The automated execution of the subsequent processing steps allows easy application of the algorithms in dental care.
The output data of the image processing procedures are suitable for mathematical modeling of the central line. The proposed methods can help automate the preparation and design of several kinds of endodontic interventions.
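Of the steps listed for the micro-CT pipeline, fuzzy c-means clustering is a standard, self-contained building block. Below is a minimal numpy sketch of the algorithm; the two-cluster setting, fuzzifier m = 2, and the deterministic initialization are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Fuzzy c-means: soft-assign the rows of X to c clusters (fuzzifier m > 1)."""
    # Deterministic init for illustration: evenly spaced samples as initial centers
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(iters):
        # Distances from every sample to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)); rows sum to 1
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Center update: membership-weighted means of the samples
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U
```

In the segmentation context described above, X would hold per-pixel intensity (or histogram-derived) features of a CT slice, and thresholding the memberships separates the canal lumen from the surrounding dentin.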
Because common complex diseases are affected by multiple genes and environmental factors, it is essential to investigate gene-gene and/or gene-environment interactions to understand the genetic architecture of complex diseases. After the great success of large-scale genome-wide association (GWA) studies using high-density single nucleotide polymorphism (SNP) chips, the study of gene-gene interactions has become the next challenge. Multifactor dimensionality reduction (MDR) analysis has been widely used for gene-gene interaction analysis. In practice, however, it is not easy to perform high-order gene-gene interaction analyses via MDR at the genome-wide level, because doing so requires exploring a huge search space and suffers from a computational burden due to high dimensionality.
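As a sketch of the core MDR idea, the snippet below collapses each multilocus genotype cell into "high risk" or "low risk" by comparing the cell's case:control ratio to the overall ratio, scores a SNP combination by its classification accuracy, and exhaustively scans all combinations of a given order. Plain training accuracy and a fixed risk threshold are simplifications; the full method adds cross-validation and permutation testing, omitted here.

```python
import numpy as np
from itertools import combinations

def mdr_accuracy(geno, labels):
    """Score one SNP combination: pool genotype cells into high/low risk,
    then measure how well that pooling classifies cases (1) vs controls (0)."""
    cells = {}
    for g, y in zip(map(tuple, geno), labels):
        cases, controls = cells.get(g, (0, 0))
        cells[g] = (cases + y, controls + (1 - y))
    ratio = labels.mean() / (1.0 - labels.mean())   # population case:control ratio
    correct = 0
    for g, y in zip(map(tuple, geno), labels):
        cases, controls = cells[g]
        high_risk = cases > ratio * controls        # cell ratio above population ratio
        correct += (high_risk == bool(y))
    return correct / len(labels)

def mdr_search(genotypes, labels, order=2):
    """Exhaustively score every SNP combination of the given order; this
    exhaustive scan is exactly what becomes infeasible at genome-wide scale."""
    return max(combinations(range(genotypes.shape[1]), order),
               key=lambda idx: mdr_accuracy(genotypes[:, idx], labels))
```

With p SNPs there are C(p, k) combinations of order k, which makes the search space, and hence the computational burden, explode for genome-wide SNP panels.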
BACKGROUND: To evaluate institutional nursing care performance in the context of national comparative statistics (benchmarks), approximately one in every three major healthcare institutions (over 1,800 hospitals) across the United States has joined the National Database of Nursing Quality Indicators® (NDNQI®). With over 18,000 hospital units contributing data for nearly 200 quantitative measures at present, reliable and efficient input data screening of all quantitative measures is critical to the integrity, validity, and on-time delivery of NDNQI reports. METHODS: With Monte Carlo simulation and quantitative NDNQI indicator examples, we compared two ad hoc methods using robust scale estimators, the Inter-Quartile Range (IQR) and the Median Absolute Deviation from the median (MAD), to the classic, theoretically based Minimum Covariance Determinant (FAST-MCD) approach for initial univariate outlier detection. RESULTS: While the theoretically based FAST-MCD, used in one dimension, can be sensitive and is better suited for identifying groups of outliers because of its high breakdown point, the ad hoc IQR and MAD approaches are fast, easy to implement, and can be more robust and efficient, depending on the distributional properties of the underlying measure of interest. CONCLUSION: Given the highly skewed distributions of most NDNQI indicators within a short data screening window, the FAST-MCD approach, when used on one-dimensional raw data, could produce higher false-alarm rates for potential outliers than the IQR and MAD with the same preset critical value, and thus overburden data quality control at both the data entry and administrative ends in our setting.
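The two ad hoc screens can each be stated in a few lines. The sketch below uses conventional cutoffs (the 1.5×IQR Tukey fence and a robust z-score threshold of 3.5); the critical values actually used for NDNQI screening may differ.

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def mad_outliers(x, k=3.5):
    """Flag values whose MAD-based robust z-score exceeds k in magnitude.
    The factor 0.6745 rescales the MAD to estimate the standard deviation
    under normality."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    robust_z = 0.6745 * (x - med) / mad
    return np.abs(robust_z) > k
```

Both scale estimators resist contamination by the very outliers they screen for, which is why such rules remain usable on skewed indicator distributions (possibly after a symmetrizing transform).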
BACKGROUND: Traditional habitat knowledge is an understudied part of traditional knowledge. Though the number of studies has increased worldwide in the last decade, this knowledge is still rarely studied in Europe. We document the habitat vocabulary used by the Csango people and determine the features they used to name and describe these categories. STUDY AREA AND METHODS: The Csango people live in Gyimes (Carpathians, Romania). The area is dominated by coniferous forests, hay meadows, and pastures. Animal husbandry is the main source of living. Data on the knowledge of the habitat preferences of 135 salient wild plant species were collected (2908 records, 44 interviewees). Data collected indoors were counterchecked during outdoor interviews and participatory fieldwork. RESULTS: The Csangos used a rich and sophisticated vocabulary to name and describe habitat categories. They distinguished altogether at least 142–148 habitat types and named them with 242 habitat terms. We argue that the method applied and the questions asked (‘what kind of place does species X like?’) helped the often implicit knowledge of habitats to be verbalized more efficiently than is usual in an interview. Habitat names were highly lexicalized, and most of them were widely shared. The main features used were biotic or abiotic, such as land use, dominant plant species, vegetation structure, successional stage, disturbance, soil characteristics, and hydrological and geomorphological features. The Csangos often used indicator species (28, mainly herbaceous taxa) to describe the habitats of other species. To prevent a reduction in the quantity and/or quality of hay, the Csangos avoided unnecessary disturbance of grasslands. This could explain the high number of habitats (35) distinguished dominantly by the type and severity of disturbance. Based on the spatial scale and topological inclusiveness of habitat categories, we distinguished macro-, meso-, and microhabitats.
CONCLUSIONS: Csango habitat categories were not organized into a single hierarchy, and the partitioning was multidimensional. The multidimensional description of habitats made nuanced characterization of plant species' habitats possible by providing innumerable possibilities to combine the most salient habitat features. We conclude that the multidimensionality of landscape partitioning, and the number of dimensions applied in a landscape, seem to depend on the number of key habitat gradients in the given landscape.
Echolocating bats construct an auditory world sequentially by analyzing successive pulse-echo pairs. Many other mammals rely upon a visual world, acquired through sequential foveal fixations connected by visual gaze saccades. We investigated the scanning behavior of bats and compared it to visual scanning. We assumed that each pulse-echo-pair evaluation corresponds to a foveal fixation and that sonar beam movements between pulses can be seen as acoustic gaze saccades. We used a two-dimensional 16-microphone array to determine the sonar beam direction of successive pulses and to characterize the three-dimensional scanning behavior of the common pipistrelle bat (Pipistrellus pipistrellus) flying in the field. We also used variations in the signal amplitude of single-microphone recordings as an indicator of scanning behavior in open space. We analyzed 33 flight sequences containing more than 700 echolocation calls to determine bat positions, source levels, and beam aiming. When searching for prey and orienting in space, bats moved their sonar beam in all directions, often alternately back and forth. They also produced sequences with irregular or no scanning movements. When approaching the array, the scanning movements were much smaller and the beam was moved over the array in small steps. Differences in the scanning pattern at various recording sites indicated that the scanning behavior depended on the echolocation task being performed. The scanning angles varied over a wide range and were often larger than the maximum angle measurable by our array. We found that echolocating bats use a “saccade and fixate” strategy similar to that of vision. Through the use of scanning movements, bats are capable of finding and exploring targets in a wide search cone centered along the flight direction.
In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be predicted directly from a known dipole location, but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem that uses a lookup table (LUT) to store the RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location with the closest-matching normalized RSIs in the LUT, and then refines that location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals’ positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need to calibrate individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation or social interactions between multiple WEF.
Furthermore, our method could be extended to other application areas involving dipole source localization.
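The LUT idea can be illustrated with a simple 2D model. In the sketch below, the RSI of an ideal dipole at a detector is taken as |p · (r_det − r_dip)| / r³ (the standard dipole-potential form); the circular detector layout, grid extent, and resolution are invented for illustration, and the authors' actual field model, refinement stage, and real-time implementation are not reproduced.

```python
import numpy as np

# Hypothetical geometry: 8 detectors on a unit circle around the tank center
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
DET = np.column_stack([np.cos(angles), np.sin(angles)])

def dipole_rsi(pos, theta, detectors=DET):
    """Ideal 2D dipole: |V| at each detector ~ |p . d| / r^3, d = detector - dipole."""
    d = detectors - pos
    r = np.linalg.norm(d, axis=1)
    p = np.array([np.cos(theta), np.sin(theta)])   # unit dipole moment
    return np.abs(d @ p) / r**3

def normalize(v):
    return v / np.linalg.norm(v)

# Coarse LUT over candidate positions and orientations (amplitude-free entries)
GRID = [(x, y, th)
        for x in np.linspace(-0.5, 0.5, 21)
        for y in np.linspace(-0.5, 0.5, 21)
        for th in np.linspace(0.0, np.pi, 18, endpoint=False)]
LUT = np.array([normalize(dipole_rsi(np.array([x, y]), th)) for x, y, th in GRID])

def locate(measured_rsi):
    """Return the LUT pose whose normalized RSI pattern best matches the measurement."""
    m = normalize(np.asarray(measured_rsi, dtype=float))
    return GRID[int(np.argmin(np.linalg.norm(LUT - m, axis=1)))]
```

Normalizing both the measurement and the table entries removes the unknown source amplitude, and restricting θ to [0, π) absorbs the sign ambiguity of |p · d|. A second, finer grid around the coarse match would implement the refinement step.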
Indoor positioning systems based on the fingerprint method are widely used owing to the large number of existing devices and their wide coverage. However, extensive positioning regions with a massive fingerprint database can cause high computational complexity and large errors, so clustering methods are widely applied as a solution. Traditional clustering methods in positioning systems, however, can only measure the similarity of the Received Signal Strength without considering the continuity of physical coordinates. Moreover, outages of access points can cause asymmetric matching problems, which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method, which clusters the fingerprint dataset subject to physical-distance constraints. With Genetic Algorithm and Support Vector Machine techniques, SDC achieves higher coarse positioning accuracy than traditional clustering algorithms. For fine localization, the proposed positioning system, based on the Kernel Principal Component Analysis method, outperforms counterparts based on other feature extraction methods at low dimensionality. Apart from balancing the online matching computational burden, the new positioning system performs well on radio map clustering and shows better robustness and adaptability to the asymmetric matching problem.
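For the fine-localization stage, kernel PCA itself is standard and easy to sketch. The fragment below implements RBF-kernel PCA from scratch on a fingerprint-style matrix (rows = reference points, columns = per-access-point RSS values); the kernel width and component count are illustrative assumptions, and the SDC, Genetic Algorithm, and SVM parts of the proposed system are not reproduced.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Project the rows of X onto the top principal components in RBF-kernel space."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # RBF Gram matrix
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center the data in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest components
    # Projection of training point i onto component j is sqrt(lambda_j) * v_j[i]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

A query fingerprint would then be projected with the same centered kernel against the reference set, so that online matching happens in the low-dimensional embedding rather than on the raw RSS vectors, which is what keeps the matching burden manageable.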
Three-dimensional (3D) imaging mass spectrometry (MS) is an analytical chemistry technique for the 3D molecular analysis of a tissue specimen, entire organ, or microbial colonies on an agar plate. 3D imaging MS has unique advantages over existing 3D imaging techniques, offers novel perspectives for understanding the spatial organization of biological processes, and has growing potential to be introduced into routine use in both biology and medicine. Owing to the sheer quantity of data generated, the visualization, analysis, and interpretation of 3D imaging MS data remain a significant challenge. Bioinformatics research in this field is hampered by the lack of publicly available benchmark datasets needed to evaluate and compare algorithms.
Given the hazardous nature of many materials and substances, ocular toxicity testing is required to evaluate the dangers these substances pose after exposure to the eye. Historically, animal tests such as the Draize test were used exclusively to determine the level of ocular toxicity by applying a test substance to a live rabbit’s eye and evaluating the biological response. In recent years, legislation has been introduced in many developed countries to reduce animal testing and promote alternative techniques. These techniques include ex vivo tests on deceased animal tissue, computational models that use algorithms to apply existing data to new chemicals, and in vitro assays based on two-dimensional (2D) and three-dimensional (3D) cell culture models. Here we provide a comprehensive overview of the latest advances in ocular toxicity testing techniques and discuss the regulatory framework used to evaluate their suitability.