Tag Archives: Quadrotor

Dynamic Vision Sensors (DVS) Enable High-Speed Maneuvers With Robots

I think these technologies, along with dynamic range cameras (DRC), will take off in the near future.

Original source from: http://spectrum.ieee.org/automaton/robotics/robotics-hardware/dynamic-vision-sensors-enable-high-speed-maneuvers-with-robots


We love watching quadrotors pull off amazing high-speed, high-precision acrobatics as much as anyone. But we’re also the first to point out that almost without exception, stuff like this takes place inside a controlled motion-capture environment, and that the quadrotors themselves are blind bots being controlled entirely by a computer somewhere that’s viewing the entire scene at a crazy framerate and from all angles through an expensive camera setup.

It’s going to take something new and innovative for robots to be able to perform high-speed maneuvers outside of a lab. Something like a special kind of camera called a Dynamic Vision Sensor (DVS) that solves the problems that conventional vision systems face when dealing with rapid motion.

Conventional video cameras are bad at motion because of the way they capture data. They're basically just still cameras that pump out a whole bunch of pictures (frames) every second. Each one of these frames contains data that's integrated over the entire period that the camera shutter was open, which is fine, except that you inherit the same problem still cameras have: anything in the frame that moves appreciably while the shutter is open gets blurred.
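To make the integration point concrete, here's a minimal NumPy sketch (my own illustration, not from the article) of a bright point crossing a one-dimensional sensor during a single exposure; all the numbers are made up:

```python
# A frame camera integrates light over the whole exposure, so a fast-moving
# point smears its energy across every pixel it visits.
import numpy as np

WIDTH = 32            # 1-D "sensor" with 32 pixels
EXPOSURE_S = 0.02     # 20 ms shutter time
SPEED_PX_S = 800.0    # the point moves 800 pixels per second
SUBSTEPS = 200        # fine time steps within the single exposure

frame = np.zeros(WIDTH)
for i in range(SUBSTEPS):
    t = i * EXPOSURE_S / SUBSTEPS          # time within the exposure
    x = int(SPEED_PX_S * t) % WIDTH        # where the point is at time t
    frame[x] += 1.0 / SUBSTEPS             # light accumulates at that pixel

# The point covers 800 px/s * 0.02 s = 16 pixels during one exposure, so its
# energy lands on 16 pixels instead of 1: that is motion blur.
print(np.count_nonzero(frame), "pixels received light")
```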

Most of the time, this isn't an issue for robots (or people), because we're not attempting (or observing) high-speed maneuvers. But flying robots that are moving at very high speeds need a better solution to keep track of where they are, since it's hard to make sense of your environment when your camera is telling you that everything around you is one giant smear of pixels.

A DVS is a special type of camera that says, “okay, if we’re going to be moving really fast, we don’t care about anything except for the relative motion of things around us.” Instead of sending back frames, a DVS transmits data on a per-pixel basis, and only if it detects that the pixel has changed.

In other words, it’ll send back an outline of everything that’s changing at a very high temporal resolution (microsecond), taking care of both latency and motion blur. Here it is in action in a 2008 implementation, balancing a pencil:

[Video: a DVS balancing a pencil (2008)]
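To get a feel for the data a DVS actually produces, here is a small Python sketch. It's an assumption on my part that simulating events from ordinary intensity frames is a fair stand-in; a real DVS does this comparison in analog circuitry at every pixel, asynchronously. Each output is an (x, y, timestamp, polarity) tuple, emitted only when a pixel's log intensity moves past a threshold:

```python
import numpy as np

THRESHOLD = 0.15   # log-intensity change needed to fire an event (assumed)

def frames_to_events(frames, timestamps):
    """Yield (x, y, t, polarity) for pixels whose log intensity changed."""
    ref = np.log(frames[0] + 1e-6)               # per-pixel reference level
    for frame, t in zip(frames[1:], timestamps[1:]):
        logf = np.log(frame + 1e-6)
        diff = logf - ref
        ys, xs = np.where(np.abs(diff) > THRESHOLD)
        for x, y in zip(xs, ys):
            yield (x, y, t, +1 if diff[y, x] > 0 else -1)
            ref[y, x] = logf[y, x]               # reset only the fired pixels
```

Static background pixels never change, so they produce no output at all; moving edges dominate the event stream, which is why the pencil's outline is essentially everything the sensor reports.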

And here’s a description of how it works, in the context of being used to estimate the pose of a quadcopter that’s doing flips at up to 1,200 degrees per second:

[Video: event-based pose estimation of a quadrotor flipping at up to 1,200 degrees per second]

The spatial resolution of the camera used on the robot (a modified AR Drone, if you’re wondering) is only 128×128 pixels, but its temporal resolution is in single-digit microseconds. The OptiTrack cameras you see up on the walls were just used for recording ground truth data. Over 25 trials, the DVS and control system accurately tracked the robot 24 times, for an overall accuracy of 96 percent. Not bad.

At this point, the agility of independent aerial robots is limited almost entirely by the latency of onboard sensing systems, and from the sound of things, using a DVS solves that problem, at least for vision. Future work from these researchers at the University of Zurich will focus on increasing the resolution of the DVS, teaching it to work in arbitrary environments, and implementing closed-loop control.

“Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,” by Elias Mueggler, Basil Huber, and Davide Scaramuzza from the University of Zurich, was presented last month at IROS 2014 in Chicago.

This Quadrotor Uses Google’s Project Tango to Fly Autonomously

Original source from: http://spectrum.ieee.org/automaton/robotics/aerial-robots/autonomous-quadrotor-flight-based-on-google-project-tango

I applied to get Google's Tango phone, but with no success. It seems they only chose research groups on U.S. soil (correct me if I'm wrong).

Recently, the GRASP Lab at UPenn demonstrated a quadrotor powered by a Tango phone. At a glance, it looks pretty stable and works well: disturbance rejection and fast motion are handled properly, and the flyer stabilises rapidly.

As they said, the researchers are now evaluating the accuracy of the 30 Hz state estimation coming from the Tango phone against motion-capture ground truth, and I can't wait to see how it performs.

The research motivation might be weak if they only put the phone on top of the flyer, since the same thing was done with Kinect-style RGB-D sensors three years ago. Another interesting point is that it might only work at close proximity because of the sensor's small baseline.
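To see why a small baseline hurts at range, here is a back-of-the-envelope sketch. The focal length, baseline, and disparity noise below are assumed round numbers, not Tango's actual specifications:

```python
# Triangulation gives Z = f * B / d, so a disparity error of d_err pixels
# maps to a depth error of roughly Z^2 / (f * B) * d_err: quadratic in range.
def depth_error(z_m, baseline_m, focal_px=500.0, disparity_err_px=0.5):
    """Approximate depth uncertainty at range z_m for a given baseline."""
    return z_m ** 2 / (focal_px * baseline_m) * disparity_err_px

for z in (1.0, 3.0, 10.0):
    print(f"range {z:4.1f} m -> depth error ~{depth_error(z, 0.05):.2f} m")

# With a ~5 cm phone-scale baseline: about 2 cm of error at 1 m, but about
# 2 m of error at 10 m, which is why such a sensor is only trusted up close.
```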

Here is a video from the University of Minnesota using the Tango phone for 3D mapping on a quadrotor platform.

[Video: University of Minnesota, Tango-based 3D mapping on a quadrotor]

And here is the article from IEEE Spectrum:


Image: UPenn/GRASP Lab

Early this year, Google unveiled its Project Tango smartphone, a mobile device equipped with a depth sensor, a motion tracking camera, and two vision processors that let the phone track its position in space and create 3D maps in real time. The device is particularly useful for robots, which have to navigate and locate themselves in the world. Indeed, a video showed how Google and its partners were putting the smartphone on different kinds of robots, including mobile platforms and manipulator arms.

Now researchers at the University of Pennsylvania led by Professor Vijay Kumar are taking things one step further. After getting a Tango device from Google, they put it on one of their quadrotors and let it loose inside their lab.

Kumar says that a big challenge for researchers working with flying robots is not building them but rather developing hardware and software capable of making them autonomous. Many robots use GPS to guide themselves, or, when flying indoors, rely on motion tracking systems like Vicon and OptiTrack, which offer great accuracy but require that you install sensors on walls and ceilings.

A device capable of localizing itself in space without GPS or external sensors, as the Tango phone does, opens new possibilities for flying robots. Kumar says that the Google device is remarkable because it lets you “literally velcro it to a robot and have it be autonomous.”

Giuseppe Loianno, a PhD student in Kumar’s group, has made a video showing their initial tests with the device. In the first part of the video, Loianno sets the quadrotor to hover at a fixed position and then perturbs it by moving it around, but the drone promptly returns to the starting point. Next Loianno commands the drone to go to different places in the room and, even if disturbed, the drone recovers and stays on its programmed path.

[Video: quadrotor hover and trajectory tracking with the Tango phone]
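The hover-and-recover behavior in the video is what a position feedback loop produces. Here is a generic PD sketch with made-up gains (not GRASP's actual controller): push the vehicle 1 m off its setpoint and it flies back:

```python
import numpy as np

KP, KD = 4.0, 2.8                # proportional and damping gains (assumed)

def pd_accel(pos, vel, setpoint):
    """Commanded acceleration pulling the vehicle back toward the setpoint."""
    return KP * (setpoint - pos) - KD * vel

pos = np.array([1.0, 0.0, 0.0])  # start 1 m away from the target, at rest
vel = np.zeros(3)
setpoint = np.zeros(3)
dt = 1.0 / 30.0                  # 30 Hz update, matching the Tango pose rate
for _ in range(90):              # simulate 3 seconds
    vel += pd_accel(pos, vel, setpoint) * dt
    pos += vel * dt
print(pos)                       # back near the origin despite the "push"
```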

Kumar says the only measurement from the Tango phone is its pose, which is its position plus orientation with reference to a starting coordinate system (captured at a rate of 30 Hz), and the only other sensor used is the IMU onboard the drone. (The laptop is not controlling the flight in any way; it's only used to send a desired trajectory to the drone and to render a visualization of its position in space. And the quadrotor itself is a machine that Kumar's group designed and built with off-the-shelf components.)
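Since the Tango pose only arrives at 30 Hz while the IMU runs much faster, the two have to be fused somehow. The sketch below shows one standard pattern, a complementary-style filter that dead-reckons on the IMU and nudges the estimate toward each pose fix; this is an assumption on my part, not UPenn's implementation, which would more likely be a proper EKF or UKF:

```python
import numpy as np

ALPHA = 0.2   # how strongly each 30 Hz pose fix corrects the drift (assumed)

class PoseFilter:
    """Dead-reckon on the IMU, correct with slower external pose fixes."""

    def __init__(self):
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)

    def predict(self, accel_world, dt):
        # High-rate IMU step: integrate acceleration (gravity already removed).
        self.vel += accel_world * dt
        self.pos += self.vel * dt

    def correct(self, tango_pos):
        # 30 Hz step: pull the drifting dead-reckoned estimate toward the fix.
        self.pos += ALPHA * (tango_pos - self.pos)
```

A real filter would also correct velocity and IMU biases and track orientation, but the split between a fast predict step and a slow correct step is the essential idea.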

The researchers now plan to study Tango's localization accuracy (and compare it to external motion tracking systems), but from their initial tests they estimate it to be within a centimeter. If that proves to be the case (and if Tango can be made cheap enough), it will be an impressive capability for the Google device, one that could revolutionize how mobile robots and drones navigate indoor spaces.

Kumar says that the convergence of computation, communication, and consumers has a huge potential for the robotics industry, and a device like Tango is a key advance because it’s “lowering the barrier to entry for autonomous robots.”