SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: NTSC

214

Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact in biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.
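
To illustrate the kind of time-resolved Monte Carlo simulation such a comparison relies on, here is a minimal photon-transport sketch in Python. The scattering coefficient, refractive index, isotropic phase function, and the 50 ps time gate are all illustrative assumptions, not the parameters of the engineered scattering plate assembly used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative medium parameters (assumptions, not the paper's values)
mu_s = 10.0          # scattering coefficient, mm^-1
n = 1.33             # refractive index of the medium
c = 299.792458 / n   # speed of light in the medium, mm/ns

def propagate_photon(time_gate_ns):
    """Trace one photon with isotropic scattering; return its position at the time gate."""
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    t = 0.0
    while t < time_gate_ns:
        step = rng.exponential(1.0 / mu_s)  # free path length, mm
        pos += step * direction
        t += step / c                       # accumulated time of flight, ns
        # draw a new, isotropic scattering direction
        cos_theta = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_theta = np.sqrt(1.0 - cos_theta ** 2)
        direction = np.array([sin_theta * np.cos(phi),
                              sin_theta * np.sin(phi),
                              cos_theta])
    return pos

# Histogram photon positions at one time gate (50 ps) to build a single "frame"
positions = np.array([propagate_photon(0.05) for _ in range(5000)])
frame, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                             bins=64, range=[[-5, 5], [-5, 5]])
```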

Concepts: Optics, Simulation, Monte Carlo, Monte Carlo method, Monte Carlo methods in finance, Camera, Video, NTSC

25

This paper explores the feasibility of a new technique for monitoring localised sweat rate. Wearable devices commonly used in clinical practice for sweat sampling (i.e. Macroducts®) were positioned on the body of an athlete whose sweat rate was then monitored during cycling sessions. The position to which sweat had filled the Macroduct® was indicated by a contrasting marker and captured via a series of time-stamped photos or a video recording of the device during an exercise period. Because the time of each captured image/frame is known (from the photo time stamps or the constant frame rate of the video), the sweat flow rate could be estimated through a simple calibration model. The importance of gathering this information is described, together with results from a number of exercise trials investigating the viability of the approach.
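
As a rough illustration of how a simple calibration model can turn time-stamped marker positions into a flow rate, consider the sketch below. The conversion factor UL_PER_MM and the sample readings are invented for the example and are not taken from the study.

```python
from datetime import datetime

# Illustrative calibration: microlitres of sweat per millimetre of channel fill.
# This value, and the readings below, are assumptions for the example only.
UL_PER_MM = 0.8

def sweat_flow_rate(samples):
    """samples: list of (timestamp, marker_position_mm) read from photos/frames.
    Returns the mean flow rate in microlitres per minute between the first and last sample."""
    (t0, x0), (t1, x1) = samples[0], samples[-1]
    minutes = (t1 - t0).total_seconds() / 60.0
    return (x1 - x0) * UL_PER_MM / minutes

samples = [
    (datetime(2017, 1, 1, 10, 0, 0), 2.0),
    (datetime(2017, 1, 1, 10, 5, 0), 6.5),
    (datetime(2017, 1, 1, 10, 10, 0), 11.0),
]
print(f"{sweat_flow_rate(samples):.2f} uL/min")  # ~0.72 uL/min for these made-up readings
```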

Concepts: Time, Rates, Device, Internet Explorer, Video, Frame rate, NTSC, Telecine

0

The scanning speed of atomic force microscopes continues to advance, with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed, with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high-speed scanning on an existing, non-high-speed instrument through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest, thereby reducing the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands, show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.
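
The feature-tracking idea, keeping the scan window centred on the feature of interest instead of rastering the whole field, can be sketched as follows. The centroid-based segmentation and the threshold are assumptions for illustration; the paper's feedback-based algorithm is more sophisticated.

```python
import numpy as np

def next_scan_window(prev_image, prev_origin, window_size, threshold=0.5):
    """Re-center the next scan window on the feature seen in the previous window.

    prev_image : 2D array of height values scanned over the previous window
    prev_origin: (row, col) of the previous window's top-left corner, in full-field pixels
    Returns the top-left corner of the next window. All parameters are illustrative.
    """
    mask = prev_image > threshold * prev_image.max()   # crude feature segmentation
    if not mask.any():
        return prev_origin                             # feature lost: keep the old window
    rows, cols = np.nonzero(mask)
    centroid = np.array([rows.mean(), cols.mean()])
    # Shift the window so the feature centroid sits at its center
    return (int(prev_origin[0] + centroid[0] - window_size // 2),
            int(prev_origin[1] + centroid[1] - window_size // 2))
```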

Concepts: Frame rate, NTSC, Refresh rate

0

In this Letter, we propose simultaneous imaging of flow and sound using parallel phase-shifting interferometry and a high-speed polarization camera. The proposed method visualizes flow and sound simultaneously through two factors: (i) injecting a gas whose density differs from that of the surrounding air makes the flow visible to interferometry, and (ii) time-directional processing extracts the small-amplitude sound wave from the high-speed flow video. An experiment visualizing the flow and sound emitted from a whistle was conducted at a frame rate of 42,000 frames per second. By applying time-directional processing to the obtained video, both the flow emitted from the slit of the whistle and a spherical sound wave of 8.7 kHz were successfully captured.
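
The time-directional processing step can be approximated per pixel by a narrow band-pass filter around the whistle tone, as in the sketch below (requires SciPy). The 4th-order Butterworth filter and the ±500 Hz band are assumptions; the Letter's actual processing may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FPS = 42_000          # frame rate from the abstract
F_SOUND = 8_700       # whistle tone, Hz

def extract_sound_component(video):
    """video: array of shape (frames, height, width) of phase/intensity values.
    Returns the narrow-band component around 8.7 kHz for every pixel,
    obtained by filtering along the time axis."""
    sos = butter(4, [F_SOUND - 500, F_SOUND + 500],
                 btype="bandpass", fs=FPS, output="sos")
    return sosfiltfilt(sos, video, axis=0)   # zero-phase filtering along time
```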

Concepts: Electromagnetic radiation, Frequency, Sound, Video, Frame rate, NTSC, Refresh rate, Telecine

0

Single-pixel imaging uses a single-pixel detector, rather than a focal-plane detector array, to image a scene. It offers advantages for applications such as multi-wavelength and three-dimensional imaging. However, low frame rates have been a major obstacle to the wider use of the computational ghost imaging technique since its invention a decade ago. To address this problem, a computational ghost imaging scheme that utilizes an LED-based, high-speed illumination module is presented in this work. At 32 × 32 pixel resolution, the proof-of-principle system achieved continuous imaging at a 1000 fps frame rate, approximately two orders of magnitude higher than those of other existing ghost imaging systems. The proposed scheme provides a cost-effective, high-speed imaging technique for dynamic imaging applications.
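
For context, a conventional computational ghost-imaging reconstruction correlates the illumination patterns with the single-pixel (bucket) signal, as sketched below. The random patterns and the test scene are illustrative; the work described above uses a structured LED-based illumination module and its own reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32  # 32 x 32 pixels, matching the abstract's resolution

# Illustrative scene and random illumination patterns (assumptions for the sketch)
scene = np.zeros((N, N))
scene[10:22, 10:22] = 1.0
patterns = rng.random((1000, N, N))

# Single-pixel measurements: total light collected under each pattern
signals = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

# Conventional ghost-imaging estimate: covariance of pattern and bucket signal
image = np.tensordot(signals - signals.mean(),
                     patterns - patterns.mean(axis=0),
                     axes=([0], [0])) / len(signals)
```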

Concepts: Rates, IMAGE, Image processing, Reaction rate, Pixel, Frame rate, NTSC, 24p

0

The automatic observation of the night sky with wide-angle video systems, with the aim of detecting meteors and fireballs, is now among routine astronomical observations. The observation is usually done in multi-station or network mode, so the direction and speed of the body's flight can be estimated. The high velocity of a meteor flying through the atmosphere dictates an important feature of the camera systems, namely a high frame rate. Because of these high frame rates, such imaging systems produce a large amount of data, of which only a small fraction has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems at frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and free hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.
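
A minimal version of such real-time detection is simple frame differencing, sketched below. The thresholds are arbitrary assumptions, and MAIA's actual detection pipeline is more elaborate; the sketch only shows the idea of keeping frames with motion and discarding the rest.

```python
import numpy as np

def contains_fast_object(prev_frame, frame, diff_threshold=25, min_pixels=5):
    """Flag a frame for retention if enough pixels changed since the previous frame.
    Frames are 2D uint8 arrays (e.g. from a 60 fps intensified TV camera)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int((diff > diff_threshold).sum()) >= min_pixels

def filter_sequence(frames):
    """Keep only frames showing motion; the rest can be deleted to free disk space."""
    keep = []
    for prev_frame, frame in zip(frames, frames[1:]):
        if contains_fast_object(prev_frame, frame):
            keep.append(frame)
    return keep
```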

Concepts: Scientific method, Mars, Meteorite, Video, Frame rate, Sky, NTSC, Telecine

0

For extending the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems, deinterlacing and denoising, that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available via interlacing, we use this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and we also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.
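
The starting point of interlaced-exposure HDR can be sketched as a naive per-row exposure normalization, as below. The even/odd row assignment, the linear sensor model, and the absence of any real deinterlacing or denoising are simplifying assumptions; the paper replaces this step with joint dictionary learning and sparse coding.

```python
import numpy as np

def fuse_interlaced_exposures(frame, t_short, t_long):
    """Minimal radiance estimate for a frame whose even rows use a short exposure
    and odd rows a long exposure. frame: 2D array of linear sensor values;
    t_short, t_long: the two exposure times."""
    radiance = np.empty(frame.shape, dtype=np.float64)
    radiance[0::2] = frame[0::2] / t_short   # short-exposure rows preserve highlights
    radiance[1::2] = frame[1::2] / t_long    # long-exposure rows preserve shadows
    return radiance
```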

Concepts: Problem solving, Frame rate, Persistence of vision, NTSC, Interlace, Telecine, Progressive scan, Deinterlacing

0

Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors, so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor that influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speeds during video playback result in over- and underproduction of time intervals, respectively. In three separate experiments, we investigated: a) the main effect of observed motion-speed during video playback on a time production task, and b) the interactive effect of frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback speed, and no interactive effect between playback speed and frame rate, was found on time production.

Concepts: Central nervous system, Nervous system, Time, Brain, Perception, Video, Frame rate, NTSC

0

Many studies have investigated the effect of dynamic message signs (DMS) on drivers' speed reduction and compliance in work zones, yet only a few studies have examined the design of sign content of DMS. The purpose of this study was to develop design standards for DMS to improve driver compliance and worker safety. This study investigated the impact of sign content, frame refresh rate, and sign placement on driver speed reduction, compliance, and eye movements. A total of 44 participants were recruited for this study. Each participant completed 12 simulated driving tasks in a high-fidelity driving simulator. A small-scale field study was also conducted to test the effect of DMS on vehicle speed in a highway work zone. Results showed sign content and placement had no impact on speed reduction and compliance. However, sign frame refresh rate was found to have a significant effect on drivers' initial speed and speed reduction. Participants had longer fixation duration on DMS when worker presence was mentioned in the sign content. Results of the field study suggested that the DMS is most effective at night.

Concepts: Participation, ARIA Charts, Zone, Frame rate, NTSC, Refresh rate, Interlace, Telecine

0

The frame rate of digital high-speed video cameras was 2000 frames per second (fps) in 1989 and has been increasing exponentially. A simulation study showed that a silicon image sensor made with a 130 nm process technology can achieve about 10¹⁰ fps. The frame rate thus seems to be approaching its upper bound. Rayleigh proposed an expression for the theoretical spatial resolution limit when the resolution of lenses approached that limit. In this paper, the temporal resolution limit of silicon image sensors is theoretically analyzed. It is revealed that the limit is mainly governed by the mixing of charges with different travel times, caused by the distribution of the penetration depth of light. The derived expression of the limit is extremely simple, yet accurate. For example, the limit for green light of 550 nm incident on silicon image sensors at 300 K is 11.1 picoseconds. Therefore, the theoretical highest frame rate is 90.1 Gfps (about 10¹¹ fps).
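
As a quick consistency check of the quoted numbers (assuming, as the abstract implies, that the highest frame rate is simply the reciprocal of the temporal resolution limit), the 11.1 ps limit indeed corresponds to roughly 90.1 Gfps:

```python
# Reciprocal of the 11.1 ps temporal-resolution limit quoted in the abstract.
t_limit = 11.1e-12           # seconds, for 550 nm light in silicon at 300 K
frame_rate = 1.0 / t_limit   # frames per second
print(f"{frame_rate:.3e} fps")   # ~9.01e10 fps, i.e. about 90.1 Gfps
```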

Concepts: Time, Function, Optics, Wavelength, Digital photography, Image sensor, Frame rate, NTSC