Tag Archives: NASA

Google’s Project Tango headed to International Space Station

 

Source from: http://www.pcworld.com/article/2110660/googles-project-tango-headed-to-international-space-station.html

Google’s Project Tango, the prototype smartphone packed with sensors so it can learn and sense the world around it, is heading to the International Space Station.

Two of the Tango phones are due to be launched to the ISS on the upcoming Orbital 2 mission, which is scheduled to launch in May and take supplies to the station. The phones will be used as part of a NASA project that’s developing robots that could one day fly around the inside or outside of the space station, or even be used in NASA’s planned mission to land on an asteroid.

Work on the robots is already going on at NASA’s Ames Research Center in Silicon Valley, and this week the space agency let a small group of reporters visit its lab and see some of the research.

[Image: Three Spheres satellites float inside the International Space Station.]

The phones, which are being supplied to a limited number of developers at present, were unveiled by Google a month ago. They include several cameras and infrared range-finding so the phone can build up a three-dimensional model of its surroundings—a significant difference from current handsets that can see only a two-dimensional world through a single camera.

Google has already shown the phones being used to build up a detailed map of the interior of a home or office, but NASA has much bigger plans. At Ames, which is just minutes from Google’s Mountain View headquarters, researchers have attached a Tango handset to a robot development platform called a “Sphere.”

Technically an 18-sided polyhedron, each Sphere is about the size of a soccer ball and self-powered. They can free-fly around the inside of the ISS thanks to carbon dioxide-powered thrusters, said Chris Provencher, Smart Spheres project manager at NASA.

The Spheres have already been used in developing autonomous equipment. The space agency conducted a Spheres test with a Nexus S smartphone as part of Shuttle mission STS-135 in 2011, but the Tango phones promise more capabilities.

“We are researching how effective Project Tango’s vision-based navigation capabilities are for performing localization and navigation of a mobile free flyer on ISS,” said Andres Martinez, Spheres Manager at NASA.

“Specifically, we are researching how well the 3-D modeling and visual odometry can be used to let the [Spheres] free flyer learn its environment and maneuver through it based on what it sees,” said Martinez. “This is in contrast to the current Spheres localization system, which relies on fixed sensors in the environment to help the Spheres track its position.”
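To make that contrast concrete, here is a minimal sketch (my own illustration in Python, not NASA's or Google's code) of the dead-reckoning idea behind visual odometry: the free flyer only ever estimates how far it moved between successive camera frames and integrates those increments into a pose, rather than asking fixed sensors in the environment for an absolute position. The motion increments below are made-up numbers.

```python
import math

# Hypothetical illustration of visual-odometry-style dead reckoning.
# Each entry is the relative motion estimated between two camera frames:
# (metres travelled forward, change in heading in radians). These values
# are invented for the example.
frame_to_frame = [(0.10, 0.00), (0.10, 0.05), (0.12, 0.05), (0.08, -0.02)]

x, y, heading = 0.0, 0.0, 0.0   # pose in an arbitrary world frame

for forward, turn in frame_to_frame:
    heading += turn                      # accumulate the rotation...
    x += forward * math.cos(heading)     # ...then step along the new heading
    y += forward * math.sin(heading)

print(f"estimated pose: x={x:.2f} m, y={y:.2f} m, heading={heading:.2f} rad")
```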

On Monday, NASA Administrator Charles Bolden saw a demonstration of the Tango-equipped Spheres during a visit to Ames. One of the phones was attached to a Spheres satellite, which slowly glided across a huge granite table in a laboratory.

There are already three Spheres units on the ISS.

Hearing that researchers are working toward a robot that would autonomously fly around the inside and possibly outside of the ISS carrying out checks, Bolden asked if the same technology could be put to use on NASA’s planned asteroid mission. The space agency wants to approach and capture a piece of an asteroid, and Bolden wondered if the work could form the base of a robot that could approach, analyze and help identify a target for the mission.

That could be so, said Provencher.

Researchers hit upon the idea of using smartphones in their development work when they realized the features they wanted—Wi-Fi, a camera, more processing power—were all present in an off-the-shelf device.

The phones in use by NASA have had several minor modifications. The lithium-ion battery pack has been removed, so each phone is powered by six AA batteries instead, and the cellular radio chip has been taken out to put the handset into “the ultimate airplane mode,” said Provencher. A cover has also been put over the screen to contain the pieces of glass should it shatter.

(Additional reporting by Melissa Aparicio in San Francisco.)

A close(ish) encounter with Voyager 2

Proudly sourced from: http://robohub.org/a-closeish-encounter-with-voyager-2/

It is summer 1985. I'm visiting Caltech with colleague and PhD supervisor Rod Goodman. Rod has just been appointed to the Electrical Engineering Department at Caltech, and I'm still on a high from finishing my PhD in Information Theory. Exciting times.

Rod and I are invited to visit the Jet Propulsion Labs (JPL). It's my second visit to JPL, but it turned into probably the most inspirational afternoon of my life. Let me explain.

After the tour, the good folks who were showing us round asked if I would like to meet some of the post-docs in the lab. As one of them put it: the fancy control room with the big wall screens is really for the senators and congressmen – this is where the real work gets done. So, while Rod went off to discuss stuff with his new faculty colleagues, I spent a couple of hours in a back-room lab with a Caltech post-doc working on – as he put it – a summer project. I'm ashamed to say I don't recall his name, so I'll call him Josh. Very nice guy, a real Southern Californian dude.

Now, at this point, I should explain that there was a real buzz at JPL. Voyager 2, which had already more than met its mission objectives was now on course to Uranus and due to arrive in January 1986. It was clear that there was a significant amount of work in planning for that event. The first ever opportunity to take a close look at the seventh planet.

So, Josh is sitting at a bench and in front of him is a well-used Apple II computer. And behind the Apple II is a small display screen so old that the phosphor is burned. This used to happen with CRT computer screens – it's the reason screen savers were invented. Beside the computer are notebooks and manuals, including, prominently, a piece of graph paper with a half-completed plot. Josh then starts to explain: one of the cameras on Voyager 2 has (they think) a tiny piece of grit* in the camera turntable – the mechanism that allows the camera to be panned. This space grit means that the turntable is not moving as freely as it should. It's obviously extremely important that, when Voyager gets to Uranus, they be able to point the cameras accurately, so Josh's project is to figure out how much torque is (now) needed to move the camera turntable to any desired position. In other words: re-calibrate the camera's controller.

At this point I stop Josh. Let me get this straight: there's a spacecraft further from Earth, and flying faster, than any man-made object ever, and your summer project is to do experiments with one of its cameras, using your Apple II computer. Josh: yeah, that's right.

Josh then explains the process. He constructs a data packet on his Apple II, containing the control commands to address the camera's turntable motor and to instruct the motor to drive the turntable. As soon as he's happy that the data packet is correct, he then sends it – via the RS-232 connection at the back of his Apple II – to a JPL computer (which, I guess, would be a mainframe). That computer then, in turn, puts Josh's data packet together with others from other engineers and scientists also working on Voyager 2, after – I assume – carefully validating the correctness of these commands. Then the composite data packet is sent to the Deep Space Network (DSN) to be transmitted, via one of the DSN's big radio telescopes, to Voyager 2.

Then, some time later, the same data packet is received by Voyager 2, decoded and de-constructed and said camera turntable moves a little bit. The camera then sends back to Earth, again via a composite data packet, some feedback from the camera – the number of degrees the turntable moved. So a day or two later, via a mind-bogglingly complex process involving several radio telescopes and some very heavy duty error-correcting codes, the camera-turntable feedback arrives back at Josh’s desktop Apple II with the burned-phosphor screen. This is where the graph paper comes in. Josh picks up his pencil and plots another point on his camera-turntable calibration graph. He then repeats the process until the graph is complete. It clearly worked because six months later Voyager 2 produced remarkable images of Uranus and its moons.
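The loop Josh was running amounts to a very patient calibration experiment: command a torque, wait a day or two for the telemetry, plot how far the turntable actually moved, repeat. Below is a rough sketch of that bookkeeping in Python; the toy turntable model, torque values and function names are all my own assumptions, and the real commands of course went through JPL's validation and the Deep Space Network rather than a direct call.

```python
import random

# Hypothetical sketch of the calibration bookkeeping, not JPL code.
# command_turntable() is a toy stand-in for the real round trip:
# Apple II -> JPL computer -> Deep Space Network -> Voyager 2 and back.

STICTION = 0.8          # made-up torque needed before the turntable moves at all
DEG_PER_TORQUE = 20.0   # made-up response once it is moving

def command_turntable(torque):
    """Pretend to uplink a command and read the movement back from telemetry."""
    if torque <= STICTION:
        return 0.0
    return (torque - STICTION) * DEG_PER_TORQUE + random.uniform(-0.5, 0.5)

calibration = []  # (commanded torque, degrees the turntable reported moving)
for torque in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):       # illustrative settings
    degrees_moved = command_turntable(torque)
    calibration.append((torque, degrees_moved))      # one more point on the graph
    print(f"torque {torque:.1f} -> moved {degrees_moved:6.2f} degrees")

# With the curve complete, the controller can be re-calibrated: choose the
# commanded torque whose measured motion best matches the slew you want.
```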

This was, without doubt, the most fantastic lab experiment I'd ever seen. From his humble Apple II in Pasadena, Josh was doing tests on a camera rig, on a spacecraft, about 1.7 billion miles away. For a Thunderbirds kid, I really was living in the future. And being a space nerd I already had some idea of the engineering involved in NASA's deep space missions, but that afternoon in 1985 really brought home to me the extraordinary systems engineering that made these missions possible. Given the very long project lifetimes – Voyager 2 was designed in the early 1970s, launched in 1977, and is still returning valuable science today – its engineers had to design for the long haul: missions that would extend over several generations. Systems design like this requires genius, farsightedness and technical risk-taking. Engineering that still inspires me today.

*it later transpired that the problem was depleted lubricant, not space grit.

Meet NASA’s Futuristic Drone Research Lab

Source from: http://spectrum.ieee.org/automaton/robotics/aerial-robots/unmanned-aerial-systems-at-nasa-dryden

The link to NASA Dryden is temporarily shut down, but the others are working. :)


[ NASA Dryden ]

 

Last week, NASA and AUVSI invited a carefully selected, elite group of media (which obviously included IEEE Spectrum) to take a tour of the Unmanned Aircraft Systems programs at NASA Dryden. The Dryden Flight Research Center (DFRC) is located approximately in the middle of nowhere, inside Edwards Air Force Base on a huge dry lake bed out in the Mojave desert. The remoteness of the area, plus the availability of over 100 square kilometers of empty flat lake bed to land on if necessary, makes Dryden a fantastic place to test out all kinds of futuristic and occasionally bizarre aircraft. And we got to meet a few of them.

Let’s start with Ikhana, Dryden’s Predator B UAV. You’re probably familiar with the Predator drone, but the Predator B is significantly larger and more powerful. The military’s version of this UAV is the MQ-9 Reaper, which is capable of carrying 15 times the payload at three times the speed of the Predator A, for up to 30 hours at a stretch. “Ikhana” is a Choctaw Native American word that means “intelligent, conscious, or aware,” and appropriately enough, Dryden uses Ikhana primarily for remote sensing and monitoring. The drone is often used to map wildfires, for example.

This is what flying Ikhana is like, almost. The setup above is a simulator, and includes features that you won’t find in an actual Predator control station, like a wide angle forward view. At Dryden, they experiment with things like this to see how valuable different cameras or data might be to remote pilots. In other words, they’re trying to figure out what displays a drone pilot needs to effectively operate in national airspace.

When flying Ikhana around Dryden, pilots get to use a very low-latency radio connection for direct control using a joystick, just like in a video game. Away from Dryden, Ikhana is communicated with over a satellite connection, and flies autonomously via waypoint control. It’s still possible to use a joystick to control the drone in an emergency, but the 1.5 second latency makes it “basically unflyable.” Pilots practice it anyway, though, just in case there’s an emergency and they need to land the drone by hand.
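That "basically unflyable" remark is easy to reproduce with a toy model. The sketch below is my own illustration, not NASA's control software: the same simple proportional "stick" controller tries to null a position error twice, once over a near-instant link and once with roughly the 1.5-second satellite round trip, and the delayed case oscillates out of control. The gain, time step and vehicle model are arbitrary assumptions.

```python
# Toy illustration of why delayed feedback makes hand-flying hard.
# A pilot (modelled as a proportional controller) tries to null a
# position error, but only ever sees where the aircraft *was*,
# delay_s seconds ago. Gains and time steps are made-up numbers.

def simulate(delay_s, gain=2.0, dt=0.01, t_end=20.0):
    d = max(1, round(delay_s / dt))   # link delay expressed in time steps
    x = 1.0                           # initial position error
    seen = [x] * d                    # stale states the "pilot" can see
    peak = abs(x)
    for _ in range(int(t_end / dt)):
        u = -gain * seen[0]           # stick input based on the delayed view
        x += dt * u                   # very simple vehicle response
        seen = seen[1:] + [x]
        peak = max(peak, abs(x))
    return x, peak

for delay in (0.02, 1.5):             # direct radio link vs. satellite link
    final, peak = simulate(delay)
    print(f"delay {delay:>4} s: final error {final:+8.2f}, worst excursion {peak:8.2f}")
```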

The next size up from Ikhana is one of Dryden’s Global Hawks:

This particular aircraft was the very first Global Hawk ever made, and had its first flight back in 1998. It’s quite a performer, though, able to cruise at 65,000 feet with a range of about 11,000 nautical miles. Even while based out of Dryden, it has no problem heading into the Atlantic or the Caribbean for extended weather monitoring, coming back home up to 30 hours later.

With a wingspan of 116 feet (that’s longer than a basketball court), the Global Hawk has nearly the same wingspan as a Boeing 737, and it can haul 1,500 pounds of payload, including deployable sensors that it can drop into particularly bad weather. For the last month or so, NASA has been partnering with NOAA’s National Hurricane Center to forecast hurricanes in the Gulf of Mexico.

Unlike Ikhana, the Global Hawks don’t have an option for human control. They’re almost completely autonomous, requiring only that a human tells them where to go. This means that when a Global Hawk is flying, most of the time the team of humans in control has very little to do. While we were at Dryden, NASA’s second Global Hawk was on its way back home from hurricane monitoring, so we were able to check out the frantic activity in the flight operations center:

Yup. Frantic activity. Some of the other photographers actually asked that these guys sit up and put their hands on their keyboards “to look like they’re doing something.” Generally, the Global Hawk just does its own thing from launch to landing, even down to parking itself back in front of the hangar and shutting its engine off. The humans are around to coordinate with the Federal Aviation Administration (FAA), and in case something unexpected happens.

 


 

In addition to these big unmanned aerial systems, NASA Dryden also works on a number of “subscale technology demonstrators” in their model shop.

This little guy is called DROID (Dryden Remotely Operated Integrated Drone). It’s been used as a testbed to develop something called the Automatic Ground Collision Avoidance System, which is a system that autonomously prevents aircraft from crashing into things when there are visibility or navigation problems. With a detailed terrain model on-board and a regular GPS, DROID can avoid flying into terrain, taking over for a human pilot when necessary. A key feature of this system is that it only takes over when necessary, and the only time a pilot would notice it in action is if it’s too late for a human to make an avoidance maneuver on their own.
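As a rough illustration of the "only take over when necessary" idea, here is a hypothetical Python sketch: project the aircraft's track against a terrain model and hand control to the autopilot only once the predicted time to impact drops below the last moment an automated pull-up could still succeed. The terrain lookup, margins and numbers are all my own assumptions, not NASA's Auto-GCAS logic.

```python
# Hypothetical sketch of an "only intervene at the last moment" check,
# loosely inspired by the description above. Not NASA's Auto-GCAS.

LAST_CHANCE_S = 3.0   # assumed time an automated avoidance manoeuvre needs

def terrain_elevation_m(x_m):
    """Stand-in for the on-board terrain database (a made-up ridge)."""
    return 500.0 if 2_000.0 <= x_m <= 4_000.0 else 100.0

def should_take_over(x_m, altitude_m, ground_speed_mps, climb_rate_mps,
                     horizon_s=30.0, step_s=0.5):
    """Predict ahead along the current track; intervene only if the
    aircraft is projected to hit terrain sooner than LAST_CHANCE_S."""
    t = 0.0
    while t <= horizon_s:
        future_x = x_m + ground_speed_mps * t
        future_alt = altitude_m + climb_rate_mps * t
        if future_alt <= terrain_elevation_m(future_x):
            return t <= LAST_CHANCE_S     # too late for the pilot: take over
        t += step_s
    return False                          # no predicted impact: stay hands-off

# Level flight at 400 m toward a 500 m ridge 1.5 km ahead: still the pilot's problem.
print(should_take_over(x_m=500.0, altitude_m=400.0,
                       ground_speed_mps=100.0, climb_rate_mps=0.0))   # False
# Same track, 200 m from the ridge: too late for a human, the system takes over.
print(should_take_over(x_m=1_800.0, altitude_m=400.0,
                       ground_speed_mps=100.0, climb_rate_mps=0.0))   # True
```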

Dryden has been testing this system in full-size F-16s as well, and it’s effective enough that human test pilots get really, really nervous flying towards mountains and waiting for the autonomous avoidance system to kick in. Eventually, we should see this kind of technology on civilian aircraft as well.

 

This next project out of the Dryden model shop is the Towed Glider Air-Launch Concept, which the NASA scientists are testing as a way of reducing the cost of sending satellites into orbit.

If you’re thinking to yourself, “hey, that just looks like two gliders stuck together,” that’s because it’s just two gliders stuck together. Here it is behind a DROID tow plane:

The idea here is that you’d sling your rocket and satellite underneath this robotic glider, tow it up to altitude, and then fire the rocket off from there, after which the glider would autonomously fly itself back to base. It would be a lot like Virgin Galactic’s White Knight 2, except it wouldn’t be self powered.

Why is this towing approach better than just attaching the rocket and satellite to a carrier aircraft? It’s actually fairly straightforward: with a towed glider, you have to worry much more about lift than about thrust. The glider can lift a rocket that’s twice its own weight, and with a tow plane like a 777, you could (potentially) tow a glider and payload combination of a million pounds (!). For practical launch capability, however, this concept probably makes more sense for small- to medium-sized payloads.
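Applying the quoted 2:1 lift figure to that million-pound combination gives a rough feel for the split; the snippet below is just my own back-of-the-envelope check on the article's numbers, not anything from Dryden.

```python
# Back-of-the-envelope check on the figures above (illustrative only):
# the glider is said to lift a rocket twice its own weight, so a
# 1,000,000 lb glider-plus-payload combination splits roughly 1:2.
combo_lb = 1_000_000
glider_lb = combo_lb / 3                   # one part glider...
rocket_and_payload_lb = 2 * glider_lb      # ...two parts rocket and payload
print(f"glider ~{glider_lb:,.0f} lb, rocket + payload ~{rocket_and_payload_lb:,.0f} lb")
```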

The guys at Dryden working on this project seem slightly confused as to why nobody is doing this yet, because it seems like a very good idea: the glider would be two-thirds the size and one-fifth the cost of comparable carrier aircraft. They hope to get funding to build a larger glider plane to test out the concept within the next year.

 


 

Dryden is also home to all kinds of other incredible relics. Most of them aren’t robots, but for fans of aeronautical history, it’s a treasure trove. While taking a shortcut from the Global Hawk hangar to the model shop, our guide pointed through a tiny window into a dark and dusty room, and inside we caught a glimpse of one of the original Lunar Landing Research Vehicles (LLRVs), pictured below, along with the M2-F1, the first manned lifting body aircraft. After making a minor fanboy spectacle of myself, I convinced them to open up the room and let me take a few pictures:

The LLRV was a big jet engine pointed at the ground, with a frame around it. The engine was powerful enough to lift the entire vehicle into the air and balance it there, simulating low gravity flight for training astronauts on lunar landings. Here’s some video:

 

And here’s the M2-F1; I took the picture while crammed into a bit of a corner, since it was a small room stuffed with some big aircraft:

The M2-F1 has no wings, but the shape of the body of the aircraft creates enough lift to keep it airborne. Initial testing in the 1960s began by towing this thing at 120 miles per hour across the lake bed behind a souped-up Pontiac, and towed flight tests followed.

 

Here’s one last aircraft (a personal favorite of mine), which is parked outside of Dryden:

I’m not tall enough to show you why the X-29 is so cool, so here’s a picture from the top:

Those forward-swept wings and canards made the X-29 exceptionally agile, and (let’s be honest here) freakin’ sweet looking. And as we explained in a 1985 IEEE Spectrum article, the unconventional wings also made the plane “unflyable without the aid of computers.” The X-29 is a good way to sort of summarize what Dryden is all about, when it comes down to it: turning crazy awesome ideas into actual flying aircraft.

We’ll be following up on some of these projects (especially the unmanned stuff, of course) as they progress, and if you have any specific questions about anything we’ve covered, let us know in the comments and we’ll do our best to get you some specific answers.

[ NASA Dryden ]