Category Archives: Sensor

What is the difference between multispectral and hyperspectral imagery?

Original source from: http://www.extension.org/pages/40073/what-is-the-difference-between-multispectral-and-hyperspectral-imagery#.VUGDZdOqpBc

Multispectral imagery is produced by sensors that measure reflected energy within several specific sections (also called bands) of the electromagnetic spectrum. Multispectral sensors usually have between 3 and 10 different band measurements in each pixel of the images they produce. Examples of bands in these sensors typically include visible green, visible red, near infrared, etc. Landsat, QuickBird, and SPOT are well-known satellites that carry multispectral sensors. Hyperspectral sensors measure energy in narrower and more numerous bands than multispectral sensors. Hyperspectral images can contain as many as 200 (or more) contiguous spectral bands. The numerous narrow bands of hyperspectral sensors provide an essentially continuous spectral measurement across the portion of the electromagnetic spectrum they cover, and are therefore more sensitive to subtle variations in reflected energy. Images produced from hyperspectral sensors contain much more data than images from multispectral sensors and have a greater potential to detect differences among land and water features. For example, multispectral imagery can be used to map forested areas, while hyperspectral imagery can be used to map tree species within the forest.
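To make the difference in band structure and data volume concrete, here is a small illustrative sketch. The band counts, band indices, and array shapes are assumptions chosen for illustration, not the specifications of any particular sensor:

```python
import numpy as np

height, width = 512, 512

# Multispectral: a handful of broad bands per pixel (here 4, assumed to be
# blue, green, red, near-infrared in that order; illustrative layout only)
multispectral = np.zeros((height, width, 4), dtype=np.float32)

# Hyperspectral: hundreds of narrow, contiguous bands per pixel (here 224)
hyperspectral = np.zeros((height, width, 224), dtype=np.float32)

# With a few broad bands you can compute simple indices such as NDVI
# from the red and near-infrared bands (indices 2 and 3 in this example):
red, nir = multispectral[:, :, 2], multispectral[:, :, 3]
ndvi = (nir - red) / (nir + red + 1e-6)  # small epsilon avoids division by zero

# With a hyperspectral cube, every pixel carries a near-continuous spectrum
# that can be compared against reference spectra (for example, of tree species):
pixel_spectrum = hyperspectral[100, 100, :]  # 224 reflectance values for one pixel

print(hyperspectral.nbytes // multispectral.nbytes)  # 56: far more data per scene
```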

 

[Image: multispectral compared to hyperspectral imaging. Courtesy: http://en.wikipedia.org/wiki/Hyperspectral_imaging]

 

Dynamic Vision Sensors (DVS) Enable High-Speed Maneuvers With Robots

I think these technologies will take off in the near future, along with dynamic range cameras (DRC).

Original source from: http://spectrum.ieee.org/automaton/robotics/robotics-hardware/dynamic-vision-sensors-enable-high-speed-maneuvers-with-robots

 

We love watching quadrotors pull off amazing high-speed, high-precision acrobatics as much as anyone. But we’re also the first to point out that almost without exception, stuff like this takes place inside a controlled motion-capture environment, and that the quadrotors themselves are blind bots being controlled entirely by a computer somewhere that’s viewing the entire scene at a crazy framerate and from all angles through an expensive camera setup.

It’s going to take something new and innovative for robots to be able to perform high-speed maneuvers outside of a lab. Something like a special kind of camera called a Dynamic Vision Sensor (DVS) that solves the problems that conventional vision systems face when dealing with rapid motion.

Conventional video cameras are bad at motion because of the way that they capture data. They’re basically just still cameras that pump out a whole bunch of pictures (frames) every second. Each one of these frames contains data that’s integrated over the entire period of time that the camera shutter was open, which is fine, except that you have the same problem that still cameras have: if something in the frame moves appreciably while the shutter is open, it ends up as a blur.
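To see why integrating over the shutter period produces a smear, here is a tiny toy simulation (a hypothetical one-dimensional sensor and moving object, not real camera code):

```python
import numpy as np

# Toy model: a bright 1-pixel-wide object moving across a 1D sensor line
# during a single exposure. A frame sums light over the whole exposure,
# so the moving object smears across every position it visited.
sensor_width = 20
exposure_steps = 8          # sub-intervals of one shutter-open period
frame = np.zeros(sensor_width)

for t in range(exposure_steps):
    position = 5 + t        # the object moves one pixel per sub-interval
    instantaneous = np.zeros(sensor_width)
    instantaneous[position] = 1.0
    frame += instantaneous / exposure_steps   # integration over the exposure

print(np.round(frame, 2))
# Instead of a single bright pixel, the frame shows a dim streak from
# position 5 to 12: the motion blur described above.
```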

Most of the time, this isn’t an issue for robots (or people), because we’re not attempting (or observing) high-speed maneuvers. But flying robots that are moving at very high speeds need a better solution to keep track of where they are, since it’s hard to keep track of your environment when your camera is telling you that everything around you is one giant smear of pixels.

A DVS is a special type of camera that says, “okay, if we’re going to be moving really fast, we don’t care about anything except for the relative motion of things around us.” Instead of sending back frames, a DVS transmits data on a per-pixel basis, and only if it detects that the pixel has changed.
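Concretely, that per-pixel stream might look something like the sketch below; the event format, field names, and time window are assumptions for illustration, not the interface of any particular DVS:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One DVS event: the pixel that changed, a microsecond timestamp,
    and the sign of the brightness change (illustrative format only)."""
    x: int
    y: int
    t_us: int
    polarity: int  # +1 brighter, -1 darker

def accumulate_events(events, width, height, window_us):
    """Collect events from the most recent time window into a 2D 'edge image'.

    Only pixels that changed recently are nonzero, so a fast-moving object
    shows up as a crisp outline instead of the smear a frame-based camera
    would record.
    """
    image = [[0] * width for _ in range(height)]
    if not events:
        return image
    latest = max(e.t_us for e in events)
    for e in events:
        if latest - e.t_us <= window_us:
            image[e.y][e.x] += e.polarity
    return image

# Example: a short burst of events as a bright edge sweeps along one row
events = [Event(3, 2, 10, +1), Event(4, 2, 400, +1), Event(5, 2, 900, +1)]
edge_image = accumulate_events(events, width=8, height=4, window_us=1000)
print(edge_image[2])  # [0, 0, 0, 1, 1, 1, 0, 0]
```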

In other words, it’ll send back an outline of everything that’s changing, at very high temporal resolution (on the order of microseconds), taking care of both latency and motion blur. Here it is in action in a 2008 implementation, balancing a pencil:

[Video: a 2008 DVS implementation balancing a pencil]

And here’s a description of how it works, in the context of being used to estimate the pose of a quadcopter that’s doing flips at up to 1,200 degrees per second:

[Video: event-based pose tracking of a quadrotor flipping at up to 1,200 degrees per second]

The spatial resolution of the camera used on the robot (a modified AR Drone, if you’re wondering) is only 128×128 pixels, but its temporal resolution is in single-digit microseconds. The OptiTrack cameras you see up on the walls were just used for recording ground truth data. Over 25 trials, the DVS and control system accurately tracked the robot 24 times, for an overall accuracy of 96 percent. Not bad.

At this point, the agility of independent aerial robots is limited almost entirely by the latency of onboard sensing systems, and from the sound of things, using a DVS solves that problem, at least for vision. Future work from these researchers at the University of Zurich will focus on increasing the resolution of the DVS, teaching it to work in arbitrary environments, and implementing closed-loop control.

“Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,” by Elias Mueggler, Basil Huber, and Davide Scaramuzza from the University of Zurich, was presented last month at IROS 2014 in Chicago.

Maybe we won’t need to wear reading glasses while working in front of a computer screen in the near future.

Recently, an interesting piece of work was presented at SIGGRAPH 2014, covered under the headline “Vision-correcting display lets users ditch their reading glasses”. I can’t quite understand the principle behind it, but it seems to let people do screen work, such as reading an article or writing a paper, without glasses.

If you are interested in this, have a look at the article below from:

http://www.gizmag.com/vision-correcting-display-glasses-obsolete/33173/

Vision-correcting display lets users ditch their reading glasses


[Image: Researchers at UC Berkeley claim to have created a vision-correcting matrix for display screens]


We’ve seen a number of glasses-free 3D technologies in recent years, most famously in Nintendo’s 3DS, but now researchers at the University of California at Berkeley and MIT have created a prototype device that allows those with vision problems to ditch their eyeglasses and contact lenses when viewing regular 2D computer displays by compensating for the viewer’s visual impairment.

The prototype device consists of a screen printed with a matrix of pinholes measuring just 75 microns in diameter and separated by gaps 390 microns wide. This printed pinhole screen was then inserted between two layers of clear acrylic and attached to an iPod display. Using an algorithm that takes into account a person’s eyeglasses prescription, the screen is able to compensate for an individual’s specific visual impairment by adjusting the intensity and direction of the light emitted from each screen pixel.

In this way, through a technique called deconvolution (a process of reversing optical distortion, similar to that used to correct images from the Hubble telescope’s flawed mirror), the light from the image that passes through the pinhole matrix is perceived by the user as a sharp image.
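For intuition about what the deconvolution step is doing, here is a toy sketch. It stands in a simple Gaussian blur for the viewer’s defocus and uses plain Wiener-style inverse filtering on a synthetic image; this is only an illustration of the general idea, not the algorithm or parameters used by the Berkeley and MIT researchers:

```python
import numpy as np

def blur_kernel_fft(shape, sigma):
    """Frequency response of a Gaussian blur, used here as a crude stand-in
    for the defocus of a farsighted eye (an assumption for this toy example)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    return np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))

def prefilter(image, sigma, noise=1e-5):
    """Wiener-style inverse filtering: pre-distort the displayed image so that,
    after the eye's blur is applied, the perceived image is close to the original."""
    H = blur_kernel_fft(image.shape, sigma)
    F = np.fft.fft2(image)
    G = F * np.conj(H) / (np.abs(H) ** 2 + noise)  # regularized inverse filter
    return np.real(np.fft.ifft2(G))

# Simulated round trip: pre-filter the image, then apply the eye's blur to it.
image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0                      # a simple bright square
H = blur_kernel_fft(image.shape, sigma=1.0)
displayed = prefilter(image, sigma=1.0)
perceived = np.real(np.fft.ifft2(np.fft.fft2(displayed) * H))
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))  # no correction at all

print(np.abs(perceived - image).max(), np.abs(blurred - image).max())
# The pre-filtered version is perceived noticeably closer to the original than
# the uncorrected one. A real display cannot emit negative light, which is one
# reason the actual research pairs the computation with a pinhole screen rather
# than relying on software filtering alone.
```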

In a trial to test this setup, the team used a camera whose lens was adjusted to emulate farsightedness in a human eye, and then displayed images to that camera. When the image-correcting matrix was placed between the screen and the observing camera, the image resolved into sharp focus.

“Our technique distorts the image such that, when the intended user looks at the screen, the image will appear sharp to that particular viewer,” said Brian Barsky, UC Berkeley professor of computer science and vision science who is leading the project. “But if someone else were to look at the image, it would look bad.”

Currently, the device also requires that the viewer remain in a fixed position for the matrix to be effective. However, Fu-Chung Huang, who is lead author of the study, says that eye-tracking technology could be used in future to allow the displays to adapt to the position of the viewer’s head. He says the team also hopes to add multi-way correction that would allow users with different visual problems to view sharp images on the same display.

In addition to common problems such as farsightedness, the team says the technology could one day also help those with more complex problems, known as high order aberrations, which eyeglasses and contact lenses are unable to correct.

“We now live in a world where displays are ubiquitous, and being able to interact with displays is taken for granted,” said Barsky. “People with higher order aberrations often have irregularities in the shape of the cornea, and this irregular shape makes it very difficult to have a contact lens that will fit. In some cases, this can be a barrier to holding certain jobs because many workers need to look at a screen as part of their work. This research could transform their lives, and I am passionate about that potential.”

The research team will present their findings at the SIGGRAPH 2014 (Special Interest Group on Graphics and Interactive Techniques) conference in Vancouver, Canada on August 12th this year. Their paper is published in the journal ACM Transactions on Graphics.

The following video shows the prototype screen in use, and explains some of the background to its development.

Source: UC Berkeley