Wednesday, September 28, 2005

The Grand Challenge

Computers, start your engines. The DARPA Grand Challenge is about to begin. Starting September 29 and continuing through October 6, unmanned vehicles will attempt to drive 150 miles across a desert in 10 hours. The vehicles will not be remotely driven; they must navigate across the course and to the finish line solely on the strength of their onboard computers and sensors. The sponsor, DARPA, is the Defense Advanced Research Projects Agency, but the contestants come from all over the United States and from several foreign countries. You can follow the progress of the race at the event's website, which contains links to individual teams and their team blogs.

The Seattle Times says this:

The military sponsors the race to speed the development of unmanned vehicles for combat. The project had an inauspicious start: Last year's inaugural contest ended soon after it began when the robots careened off course or abruptly stalled. One even got tangled in barbed wire. ... This year's race shows signs of being extremely competitive. Some vehicles have logged hundreds of self-guided miles in the Southwestern desert during summer practice runs. Several even tested on last year's course ... Vehicles will have to drive on dirt and gravel, maneuver mountain switchbacks, squeeze through choke points and avoid man-made and natural obstacles.

Carnegie Mellon's Red Team (brought to you by Caterpillar) has video links here, and they are heavy downloads. One of the entrants is a robotic motorcycle. Check out the Blue Team, which has video of their robotic contestant swimming underwater (sort of; Muttley needed here), doing ramp jumps, and other cool stuff. Any ideas on who'll finish first?


Blogger buck smith said...

I am going to DARPA challenge next week with my son! Really looking forward to that...

9/28/2005 05:26:00 AM  
Blogger Goesh said...

The enemy has had robotic fighters for some time - homicide bombers.

9/28/2005 06:40:00 AM  
Blogger desert rat said...

but they are not "smart" bombs

9/28/2005 06:52:00 AM  
Blogger The Mad Fiddler said...

Is not DARPA the group we have to thank for doing some of the original development work that created what we now call the internet?


Hmmm. Sounds like a Himalayan Sherpa derivation. But the only person I've met that wrote some of the original coding had a Scottish name.

This is way too early.

9/28/2005 07:08:00 AM  
Blogger kstagger said...

Robotic machines will be good for broad patrol and warfare duties, minimizing casualties in dangerous locations. But sometimes you need someone on the ground to poke their head into the spider-hole and take a look.

I can see a modern war machine powered by Robotic machines taking the brunt of the heavy fighting, with tactical decisions being made behind the lines. As far as battling an Insurgency, their role would be limited to breaking up defensive positions within towns.

9/28/2005 07:53:00 AM  
Blogger Annoy Mouse said...

The Defense Advanced Research Projects Agency always has its hands in some really cool stuff. What it does is seed things that are not quite ready for prime time. This motivates some of the best innovation and attracts a lot of scientists. The good folks at DARPA, though part of a bureaucracy, are some of the most intelligent around.

Interestingly, during the Clinton years the name was changed from DARPA to ARPA (in '93; the agency had originally been ARPA), dropping the Defense for oh-so-politically-correct reasons. I began calling it GARPA. You can guess what the "G" stood for. And then some genius, no doubt, decided that the "D" somewhat justified their existence, since that is where the two-billion-dollar budget comes from, and it was restored in '96.

Sci-fi tells us that letting computers network together is like introducing two of your enemies to each other, but ARPAnet turned out pretty cool.

9/28/2005 08:16:00 AM  
Blogger Annoy Mouse said...

Robots are already being used tactically as war-fighting machines with EOD teams. It's the semi-autonomy that makes them particularly useful. At present, unmanned aerial vehicles have guided takeoff and landing capability; all the operator has to worry about is punching the GPS coordinates into his mapping software, and the UAV does it. Semi-autonomy in the air is relatively simple — well, one must just avoid controlled flight into terrain. The operator is busy running the sensor suite, pipelining data to command, and firing Hellfire missiles. One hell of a computer game.

On the other hand, unmanned robotic vehicles have a real hard time negotiating terrain, but pack 'em with sensors and you have an excellent way to ferret out IEDs. They are used by police in hostage situations, and, you know, lots of applications will be realized when a vehicle can more or less figure things out for itself on the way.

9/28/2005 08:26:00 AM  
Blogger Vercingetorix said...

The motorcycle team was quoted before last year’s Grand Challenge as designing a ‘hunter-killer’ model, a high-speed system to seek out, close with, and destroy the enemy. That’s along the lines of why they made it a motorcycle.

Now, autonomous vehicles can serve as decoys on which armored or infantry troops will have to waste ammo, expose their positions, or both. This is a derivation of the old recce technique of engaging the enemy to find out where and how many there are. Should one be destroyed, it is possible to triangulate the enemy position(s) (with acoustic sensors, etc.) and fire back, maybe instantly. If the enemy does not take the bait, these vehicles can assume overwatch positions which will keep the enemies' heads down, again.

The old lie is that there are ghost-like militaries that keep growing and growing, in danger, skill, and size. The obvious counter to the ghost, the being that is everywhere you are not looking, is to look everywhere. These autonomous vehicles simply provide a further evolution in tactics and capability that forces the enemy into hiding or running or exposing themselves, and is not likely to be effectively countered with guile. At some point, the stakes are raised so high that the low-rollers simply have to leave the table; these autonomous vehicles may help do that.

My favorite: Carnegie Mellon

9/28/2005 08:32:00 AM  
Blogger Ken Wheaton said...

Smart money is on a repeat of last year. As amazing as the projects are, the course is extremely tough and remember, these vehicles cannot be remote controlled. They have to be fully autonomous. But out of hometown pride, I'm gonna go with Team Cajunbot.

9/28/2005 08:45:00 AM  
Blogger ed said...


I still think a professional televised combat league involving remotely piloted vehicles, armored and unarmored, fighting each other with rockets, cannon and machine guns would seriously kick ass.

Make NFL style league play with international teams, highly varied terrain and UAVs with cameras to catch the action, and that's ratings gold.

Let's face it. Most of us watch the History Channel to see things explode.

9/28/2005 09:22:00 AM  
Blogger Tony said...

2002 article: Robot Wars for Real

In this article, they set 'predator' robots after 'prey' robots. The Prey live on solar power cells, the Predators suck the juice out of the Prey's batteries. Wonder how it all turned out?

The motorcycle bot is amazing, I guess it has gyros in those boxes to keep it upright?

This whole topic is what I'd like to hear Ray Kurzweil talk about. I'll have to buy his book, maybe he does talk about it. Hey Ray - HAL was scary but Terminator rocks!

9/28/2005 10:42:00 AM  
Blogger Nathan said...


I believe the motorcycle uses Crossbow accelerometers to sense attitude changes, and then steers the front wheel by an amount proportional to the change in order to keep the bike upright.
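
That control idea can be sketched in a few lines. This is a toy illustration of proportional lean correction, not Blue Team's actual controller — the gain, the righting coefficient, and the simulation loop are all invented numbers:

```python
# Toy sketch of the balance scheme described above: sense the lean,
# steer the front wheel by a proportional amount, and let the steering
# push the wheels back under the center of mass. All constants are
# illustrative assumptions, not measured vehicle parameters.

def steering_correction(lean_angle_deg, gain=2.5):
    """Return a front-wheel steering angle opposing the current lean."""
    return gain * lean_angle_deg

# Crude simulation: a disturbance tips the bike 10 degrees; steering
# into the fall shrinks the lean back toward upright each timestep.
lean = 10.0
for _ in range(20):
    steer = steering_correction(lean)
    lean -= 0.1 * steer  # simplified righting effect of the steer

print(round(lean, 3))  # lean has decayed to a fraction of a degree
```

The point is only that a proportional reaction to sensed attitude change is enough, in principle, to stabilize an inherently unstable vehicle.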

The Blue Team has put up with a lot of naysaying and ridicule from faculty, students, engineers and potential sponsors for their unorthodox vehicular form, which most controls and dynamics engineering undergraduates would immediately recognize as introducing totally unnecessary control problems without any benefit that is particularly relevant to the Grand Challenge itself. In other words, the Blue Team has been playing for attention somewhat more than it has been playing to win.

But this does not really matter as long as the system works.

In particular, Blue Team has a critical weakness with process control. This is common to academic environments that are not oriented towards developing a product, especially one that has to meet reliability requirements. I believe this is evidenced by the failure that occurred last year, which was explained to me in person by Anthony Levandowski himself. The operational plan did not account for contingencies. The DARPA operator was not properly trained. The robot was not robust to these failures, and was in fact even more prone to catastrophic failure resulting from them, as a consequence of being a motorcycle.

Some of the more egregious issues have been addressed. For instance, the robot now features a "training wheel" that deploys itself in an emergency and folds up against the side of the vehicle when not needed. But we will have to wait and see whether the process control issues, particularly operational planning, guidelines, and operator training, are addressed by the team within the next few weeks before the race begins.

9/28/2005 12:09:00 PM  
Blogger Mike H. said...

Spirit and Opportunity both have an autonomous mode for terrain navigation.

And yes, ARPAnet was where it started; Al Gore was there. If you don't believe him, you'll get the real story if you just ask him.

9/28/2005 12:10:00 PM  
Blogger Tony said...


Thanks for the clarification, that sounds like a wiggly ride.

9/28/2005 01:01:00 PM  
Blogger Hanba'al said...

I think Nathan has nailed why the Blue Team has such problems. The first thing I looked for was who's behind each team. The Red Team has professionals from corporate sponsors with experience designing robots for exploration. Assuming all talent is equal, they have an edge over the Blue Team. My money is on the Red.

9/28/2005 01:50:00 PM  
Blogger Nathan said...

The Red Team has quite a lot more going for it than readily meets the eye. Carnegie Mellon has been the center of DARPA-funded autonomous ground robot development since the early 1980s. As a result, DARPA has already poured millions of dollars into CMU and up until the Grand Challenge was announced, had relatively little to show for the investment. I believe that the Grand Challenge was conceived less as an incentive for other groups, but more as an attempt to force CMU's hand. DARPA has a vested interest in seeing that Carnegie Mellon does not merely win the race, but blows all the other competition out of the water.

I do not believe that Red Team has any particular overwhelming talent or experience to really distinguish it from most of the other top-tier teams, many of whom also have corporate sponsorship and consist of cutting-edge roboticists, professional engineers, off-road racers, and other high-powered team assets. The Red Team does have a lot of associated corporate brand names, and as a result its biggest assets are its connections and easy access to money, equipment, information, and other resources. In terms of real cost, I would guess that Carnegie Mellon is probably spending at least an order of magnitude more than the next team.

I would be rather less interested in the plain results of the race than in a somewhat subjective analysis of architecture versus performance. For instance, it may turn out that one team (perhaps the Red Team) wins the race but in doing so employs some vast and costly array of difficult-to-maintain sensors unobtainable by anybody else. On the other hand, another team using only a pair of cameras may suffer a GPS system crash on the 150th mile, putting it out of the race. It will probably be shown that different teams developed "best" implementations of different systems. The question is whether at the end of the race the "best" implementations of individual systems will be identified regardless of the performance of the integrated systems. I believe this may be where the real value of this competition lies.

9/28/2005 03:50:00 PM  
Blogger Tony said...


Thanks for your analysis on this, keep writing.

I'm still stuck on how the ghostrider stays upright. It obviously DOES stay up and exhibit robust stability - I looked at the film clips on the gimbal testing but it seems that's all about how to keep the sensors oriented, not how they keep the bike stable like they do.

The gimbal certainly turns quickly enough to twitch the front wheel around in any eventuality; I just find it odd that turning the front wheel is enough to stay upright. It wasn't enough for me when I was on a motorcycle, I can testify to that.

Keep writing.

9/28/2005 04:13:00 PM  
Blogger RWE said...

Traditionally, we often have attributed personalities to our ships, aircraft, jeeps, tanks, even favorite rifles.
In fact, I know damn well that my almost 60 year old aircraft has a personality distinct from others of its type.
It will be interesting to see how close the man-machine interface becomes when the machines really do have "personalities".

9/28/2005 04:21:00 PM  
Blogger Hanba'al said...


If you look at the Wired Magazine article from last year about the race, there are other teams spending money at par with or even higher than the Red Team. Here is the link.

The Blue Team spends the least (1/3 of the average), and they are the only ones entering the race with a motorcycle (I think), so I give them the award for the most innovative and inspiring approach to the challenge, whether or not it was necessary to do it that way. But that's what the young and the restless are all about. My hat is off to them.

Technically, as an electrical and computing guy, I understand the concepts of obstacle detection, range finding, and terrain negotiation, and these tools are readily available to put together — but does anyone know how a four-wheeler or a two-wheeler negotiates a ditch?

9/28/2005 04:38:00 PM  
Blogger ledger said...

Although the MSM disparaged the first DARPA Grand Challenge, I found the results interesting (last year the MSM only showed the vehicles going astray and made sneering comments as each vehicle failed).

I note that there will be some qualifications at Fontana [California] Speedway (NASCAR has races there) and it's in my back yard. I may just check it out. Overall I think the Grand Challenge is a great idea.

Also, I think Nathan has a good handle on the players. One thing that I have always wondered: how is the course kept secret until just prior to the event?

With the huge population in Southern California, one would assume some player would catch wind of the actual course and possibly program it into his/her vehicle (or design the vehicle exactly for the terrain). I say this because the vehicle must travel an average of 15 miles per hour over rough terrain to cover the distance of 150 miles in 10 hours. If the terrain is severely rugged, the larger "Monster Truck" type vehicles would have a greater chance of success (unless they have to cross the Salton Sea, Colorado River, or the like, in a boat-like fashion). Thus, there could be some incentive to peek at the course well before the event.

9/28/2005 05:08:00 PM  
Blogger Nathan said...


I'm not aware that Blue Team uses gyros on their robot, but I know that they use the Crossbow accelerometers. I vaguely remember a video from one of their very first tests where the robot is holding itself upright while stationary. This does seem to suggest the use of a gyro or group of gyros. In fact, I do know that the Richmond Field Station where Anthony's group does their work is also host to an autonomous toy helicopter group. Furthermore, some of the Blue Team has done work with this group as well. I would venture to guess that if they are using gyros, they would be using a set of the same palm-sized units used by the toy helicopter group. However, the only stability control system I ever heard about used the accelerometers and front wheel steering.


Thank you for the link, but I've spoken with both Todd Mendenhall and Anthony Levandowski in person and I know that both of them have spent more than the Wired article indicates. As I recall, in Anthony's case he has thus far contributed over $100,000 out of his own pocket. Similarly Todd's personal contribution has long surpassed $250,000, not including the $400,000 I believe was provided by Northrop Grumman this year alone. As for the Red Team, I am informed (perhaps incorrectly?) that the large spherical sensor atop their vehicle costs $250,000 by itself. The Red Team has also paid for exclusive geographical survey contracts (to the detriment of other teams), aerial and satellite photographs, and like most top-tier teams, subscribed to premium ground station augmented GPS services for sub-meter accuracy. These services are not cheap. To be absolutely fair, I believe that the $250,000 sensor is "on loan". Consider, however, that the Red Team managed to destroy one of these sensors last year when their vehicle overturned during the QID (they have roll bars around it now). Nevertheless, it is an asset with a dollar value that is being contributed to the Red Team effort. I personally don't believe for a second that their vehicle doesn't handily break the $1 million mark, but unfortunately I really don't have the same access to insider information with the Red Team that I have with some of the other teams.

Ditches are known as "negative obstacles" and there are sensor processing algorithms out there to try to identify them. A smart vehicle classifies both positive and negative obstacles into traversable and untraversable categories according to its own characteristics such as wheel size, wheelbase, and so on. These features are integrated into a map that tells the robot where it can actually go. This provides the set of possible solutions. Then the problem can be treated as an optimization with known constraints such as the direction of the nearest GPS waypoint, maximum or minimum speed, and so on. The best systems are capable of solving both the mapping problem and optimization problem in real time. In reality that just means that the robot cannot travel faster than a certain speed without overwhelming its sensors and processing capability. Really, the biggest problem with negative obstacles is obtaining enough accurate information about them from the sensors in order to make the right classification.
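
The classification step Nathan describes can be illustrated with a minimal sketch. The geometry rules and thresholds below are invented heuristics for the sake of example, not any actual Grand Challenge team's algorithm:

```python
# Minimal sketch of negative-obstacle classification: each sensed ditch
# is labeled traversable or untraversable based on the vehicle's own
# characteristics (wheel size, ground clearance). All numbers and rules
# here are illustrative assumptions.

VEHICLE = {"wheel_diameter_m": 0.9, "ground_clearance_m": 0.35}

def classify_ditch(width_m, depth_m, vehicle=VEHICLE):
    """Very rough heuristic: a ditch is traversable if it is narrow
    relative to the wheel, or shallow relative to ground clearance."""
    if width_m < 0.5 * vehicle["wheel_diameter_m"]:
        return "traversable"   # wheel simply bridges the gap
    if depth_m < 0.5 * vehicle["ground_clearance_m"]:
        return "traversable"   # shallow enough to roll through
    return "untraversable"

print(classify_ditch(0.3, 1.0))   # narrow slot -> traversable
print(classify_ditch(2.0, 0.1))   # shallow swale -> traversable
print(classify_ditch(2.0, 1.0))   # wide and deep -> untraversable
```

In a real system these labels would be integrated into the map of where the robot can go, which then feeds the constrained optimization Nathan mentions.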

9/28/2005 05:47:00 PM  
Blogger exhelodrvr said...

One of the programmers I work with is on one of the teams (as a hobby); it will be interesting to talk to him afterwards about the race.

9/28/2005 06:48:00 PM  
Blogger Nathan said...

How is the course kept secret until just prior to the event?

The area through which the course runs is no secret. As I mentioned in the previous post, Carnegie Mellon shelled out for an exclusive contract with a geographical survey group for access to topographical maps, aerial photographs and other terrain data. I don't doubt that other teams have tried this as well. Due to environmental laws restricting travel in most of the area to certain roads, there is a finite number of corridors through which the route will most likely pass, and these corridors can be identified using the topographical and photographic data obtained from the survey services. However, this is ultimately not very useful. First, the Grand Challenge rules state that the course is traversable by most commercial 4x4 pickup trucks. Traversability is really not an issue for most vehicles. A bigger issue is GPS reception. If the route goes through a canyon or around a hill, robots will almost certainly lose GPS accuracy. The best way to deal with these conditions is to use inertial guidance. It is possible to navigate the robot using only inertial guidance and the previously obtained terrain information more or less the same way a submarine can navigate using charts, a compass and a stopwatch. However, this method really needs to be combined with an obstacle avoidance system that can ascertain whether or not the a priori knowledge is consistent with reality, and emphasize or de-emphasize the inertial navigation system accordingly.
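
The emphasize/de-emphasize idea can be sketched as a simple one-dimensional blend. Real vehicles would use something like a Kalman filter; the weighting function and all figures below are illustrative assumptions only:

```python
# Sketch of blending dead-reckoned (inertial) position with GPS,
# trusting GPS less as its reported accuracy degrades (e.g. in a
# canyon or under power lines). A toy 1-D complementary filter with
# invented numbers; real systems use Kalman-style estimators.

def fuse(inertial_pos, gps_pos, gps_error_m):
    # Weight GPS by confidence: near-full trust at sub-meter accuracy,
    # near-zero trust when the fix degrades past ~10 m.
    w = max(0.0, min(1.0, 1.0 - gps_error_m / 10.0))
    return w * gps_pos + (1.0 - w) * inertial_pos

print(fuse(100.0, 103.0, 0.5))   # good fix: estimate hugs the GPS
print(fuse(100.0, 130.0, 9.5))   # degraded fix: estimate hugs dead reckoning
```

The same weighting logic is what lets a robot lean on its a priori terrain data when the satellite picture stops matching reality.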

One would assume some player would catch wind of the actual course and possible program it into his/her vehicle (or design the vehicle exactly for the terrain).

This is exactly what happened last year, but it was not because anyone correctly guessed the course itself. DARPA itself was to blame. When DARPA released the GPS waypoints just hours before the race, the teams discovered that the waypoints were set one meter apart. As a result virtually every single team elected to turn off all sensors other than GPS, essentially "preprogramming" the exact route into their vehicles.

It so happened that the first quarter of the route passed through hilly terrain with reduced GPS coverage, or underneath high voltage power lines, causing interference. This and the shutdown of all other sensors may be the biggest explanation for the early failures last year; despite the density of waypoints, GPS performance was unexpectedly inaccurate from the very beginning and the robots were literally driving blind.

DARPA has suggested this year that the waypoints will not be as dense as they were last year, and, furthermore, that unspecified types of tank obstacles will be deliberately placed in between adjacent waypoints. As a result, all vehicles are essentially required to accomplish GPS-INS-OAS sensor fusion in order to have any chance at success.

9/28/2005 07:06:00 PM  
Blogger marymayhem said...

I would go with the team that uses the most off-the-shelf technology on a bet that what works is going to work. Team Jefferson looks interesting...

9/28/2005 07:08:00 PM  
Blogger Hanba'al said...


In pursuit of my curiosity about ditch detection and negotiation, I found a PhD thesis proposal from Alex Foessel of CMU and this paper from
Cal Tech. The PhD thesis proposal is about millimeter-wave radar techniques, while the Cal Tech paper is about optical image-processing technology. But reading more into the vehicles' radar techniques, it seems they are all optical, either laser or high-resolution mono/stereo cameras. If DARPA wants to make life harder for these guys, they can just lay a bunch of smoke screens, and we will see some of them arrive in Mexico, or fail to detect ditches in time and end up at the bottom of a lake :)

9/28/2005 07:28:00 PM  
Blogger ledger said...

Good info Nathan. That cleared-up most of my questions.

9/28/2005 07:53:00 PM  
Blogger dune runner said...

I think in the end the winner will be all of us, since the technologies being refined won't take long to find their way into safer cars, trucks, etc.

Personally though, I'll have to root for the Spirit vehicle. Gotta love those Axion Racing twins!

9/28/2005 08:19:00 PM  
Blogger Nathan said...


The better teams use combinations of radar, ladar, and camera systems for the OAS segment. Sensor fusion is in essence the calculation of the sum of information from each of these sensors, weighted by their resolutions, susceptibility to noise, and other potential fault factors. The result of this calculation is some number. This number can be applied to the previously mentioned map so that instead of a simple pass/nopass condition existing in any one direction, there is a number indicating the certainty with which the sensors have established that a particular route is passable. This generates a strong mathematical distinction between clearly impassable routes- the locations where all sensors agree that an obstacle exists- and passable routes, where more sensors agree that no obstacles are present. A properly integrated sensor suite is thus robust to the different kinds of failures and interferences that affect different kinds of sensors. While the ladar might be foiled by a smokescreen or a cloud of dust, the radar and the infrared cameras will not be fooled; while the radar may be less sensitive to nonmetal obstacles, the ladar and cameras should get good returns, and so on.
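
The weighted-sum arithmetic described above fits in a few lines. The sensor names, weights, and probabilities below are invented for illustration; they stand in for whatever calibration a real team would use:

```python
# Bare-bones version of the fusion calculation: each sensor votes on
# whether a map cell is blocked, weighted by how much that sensor can
# currently be trusted. All readings and weights are made-up examples.

def fused_certainty(readings):
    """readings: list of (obstacle_probability, sensor_weight) pairs.
    Returns a 0-1 certainty that the cell is blocked."""
    total_w = sum(w for _, w in readings)
    return sum(p * w for p, w in readings) / total_w

# Dust cloud scenario: the ladar reports an obstacle, but it is
# down-weighted in dust; radar and the infrared camera see nothing,
# so the fused certainty stays low and the cell remains passable.
dusty = [(0.9, 0.2),   # ladar, down-weighted
         (0.1, 1.0),   # radar
         (0.1, 1.0)]   # infrared camera
print(round(fused_certainty(dusty), 3))
```

The output is a graded number rather than a hard pass/no-pass flag, which is exactly what makes the suite robust to any single sensor being fooled.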

However, many of these systems are expensive individually, let alone assembled into a suite. Furthermore, robust stochastic sensor fusion is an appropriate thesis subject for PhD candidates. Most teams will probably try to get away with using only one or two- the lasers and cameras you mentioned. But I think that some of these teams will still get good results. I don't think the conditions will be so harsh that diversified sensor suites will really get the opportunity to prove their advantages.

9/28/2005 09:14:00 PM  
Blogger wretchard said...


If the idea of the degree of belief in the 'passability' of a particular direction can be expanded to include other terms, some of which would contain tactical information, you could create an array of N-length vectors representing moves, where N is how "far" you can see ahead. How is the expected value of the foreseen moves brought into the reckoning?

9/28/2005 09:55:00 PM  
Blogger Hanba'al said...

Corrected links
Alex Foessel

Cal Tech Terrain Perception of DEMO III

The PhD proposal is dry, but the Cal Tech presentation is digestible.

9/28/2005 10:28:00 PM  
Blogger Nathan said...


If I understand your question correctly, the example I would use for an "expected value" would be the direction towards the next GPS waypoint in the sequence. The elements of the array that are parallel or adjacent to this direction would probably be scaled in favor of passability by some (not very large) factor. Even with the scaling, if a plurality of sensors agree to sufficient confidence that an impassable obstacle exists, then the robot is forced to choose an alternate route but still continually seeks to reach the next waypoint.
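
One way to picture that scaling-with-veto behavior is the toy routine below. The scoring array, the boost factor, and the veto threshold are all invented parameters, not anything from an actual team's planner:

```python
# Sketch of the waypoint bias Nathan describes: candidate headings are
# scored for passability, the headings at or adjacent to the waypoint
# bearing get a modest boost, but a confidently detected obstacle still
# vetoes its heading outright. All numbers are illustrative.

def choose_heading(passability, waypoint_idx, boost=1.2, veto=0.3):
    """passability: scores in [0,1] for N candidate headings.
    Returns the index of the heading the robot should take."""
    scaled = list(passability)
    for i in (waypoint_idx - 1, waypoint_idx, waypoint_idx + 1):
        if 0 <= i < len(scaled):
            scaled[i] *= boost          # modest pull toward the waypoint
    candidates = [(s, i) for i, s in enumerate(scaled)
                  if passability[i] >= veto]  # obstacles are excluded
    return max(candidates)[1]

# Waypoint dead ahead (index 2) but blocked: the robot detours through
# the best passable heading instead, index 3.
print(choose_heading([0.6, 0.7, 0.1, 0.8, 0.5], waypoint_idx=2))
```

The boost is deliberately small: it breaks ties in favor of progress toward the waypoint without ever overriding a sufficiently confident obstacle detection.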

9/28/2005 11:10:00 PM  
Blogger Tony said...


What you are talking about in terms of sensor fusion and ratings is a common technique in OCR (Optical Character Recognition). There are many techniques to achieve OCR accuracy, and each OCR engine combines many (hundreds) of approaches to recognition. Each engine tends to be a little better at particular classes of characters/documents. These days, with modern desktop processors, multiple engines run concurrently. Then a "voting" procedure compares all the results of each engine, where each engine has assigned a probability to the likelihood of a specific character/word's identity. The combined votes lead to the decision on the character that is recognized.
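
The voting procedure Tony describes reduces to summing per-candidate scores across engines. The engines and probabilities below are made up purely to show the mechanism:

```python
# Miniature version of OCR-style voting: several engines each assign
# probabilities to candidate characters, and the weighted sum of votes
# picks the winner. Engine outputs here are invented examples.

from collections import defaultdict

def vote(engine_outputs):
    """engine_outputs: list of dicts mapping candidate -> probability.
    Returns the candidate with the highest combined score."""
    tally = defaultdict(float)
    for probs in engine_outputs:
        for candidate, p in probs.items():
            tally[candidate] += p
    return max(tally, key=tally.get)

# Two engines lean letter 'O', one is fairly sure it's the digit '0';
# the combined vote goes to 'O'.
engines = [{"O": 0.6, "0": 0.4},
           {"O": 0.7, "0": 0.3},
           {"O": 0.3, "0": 0.7}]
print(vote(engines))  # prints O
```

Swap "candidate characters" for "blocked/clear in a given direction" and the same tallying applies directly to multi-sensor fusion.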

I would assume this same "voting" approach would be used in combining inputs from multiple sensors.

9/29/2005 06:20:00 AM  
Blogger Nathan said...


Yes, that sounds like very much the same idea. Cool!

9/29/2005 08:05:00 AM  
Blogger Slocum said...

Robotic machines will be good for broad patrol and warfare duties, minimizing casualties in dangerous locations. But sometimes you need someone on the ground to poke their head into the spider-hole and take a look.

Yes and no. Last time they ran this contest, none of the teams came even close to finishing. I see remote controlled robotic machines having far greater potential than autonomous robotic machines. Human vision and human judgement are capacities that machines simply can't come close to duplicating at this point. Using remote controlled machines also takes humans out of harm's way without taking human intelligence out of the equation.

9/29/2005 01:01:00 PM  
Blogger Nathan said...


Yes and no :)

Remote controlled machines have their own unique engineering problems, and there are two ways to solve them. The first way is to extend the telepresence of the operator by increasing the amount of information flow between operator and robot. The second way is to improve the autonomy of the robot so that a large amount of information flow is unnecessary.

The teleoperation problem becomes prohibitively difficult and increasingly vulnerable as the system is scaled upwards. Specifically, a large amount of information- for instance, images from an array of three to six cameras- must be transmitted to the operator fast enough for him to generate a reaction which is then transmitted back to the robot to carry out. Latency is improved by parallelizing the information flow; in other words, by increasing the bandwidth of the communication system. However, this makes the system more susceptible to jamming. A purely teleoperated robot can be "mission killed" without the enemy ever knowing that the robot was present by simply flipping on the jammer, denying the entire area within its effective range to the teleoperated robot. Jamming-resistant communications systems are uneconomical and pose a security risk for broad deployment with a machine of this scale. On the other hand, narrow bandwidths that are less susceptible to detection or interference tend to exacerbate the latency problem. Teleoperation may be adequate for simple robots travelling at slow speeds a relatively short distance from the operator. But when the speed of the vehicle is 20-40 kph, the operator is hundreds of kilometers away, communicating by satellite, and the vehicle needs 20-40 meters to come to a stop, the vehicle can only act on the belated commands of the operator reacting to circumstances that have long since changed.
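
The arithmetic behind that last point is worth making explicit. The latency and speed figures below are rough illustrative assumptions, not measured link characteristics:

```python
# Back-of-the-envelope version of the latency argument above: at speed,
# over a long link, the vehicle covers a significant distance between an
# event occurring and the operator's command arriving. All figures are
# rough assumptions for illustration.

def distance_during_latency(speed_kph, round_trip_s, human_reaction_s=0.7):
    """Meters traveled between an event and the operator's command arriving."""
    speed_ms = speed_kph * 1000.0 / 3600.0
    return speed_ms * (round_trip_s + human_reaction_s)

# 40 kph, an assumed ~0.5 s satellite round trip, plus operator reaction:
print(round(distance_during_latency(40, 0.5), 1))  # ~13.3 m of blind travel
```

Add the 20-40 m stopping distance on top of that blind-travel figure and the case for onboard reflexes makes itself.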

The workaround to these engineering problems is to improve the autonomy of the vehicle. Some workaround! But there are several benefits that are gained by taking the human operator out of certain loops. First of all, reaction time can be improved. The robot can react (rightly or wrongly) much more quickly to a problem by itself than by informing the operator that there is a problem, waiting for a response from the operator, and then acting upon the response. Second, the robot is no longer completely dependent on the integrity of a vulnerable and expensive communications system in order to conduct its mission. Third, the human operator can be disassociated from the operation of individual vehicles. Perhaps he can operate several vehicles at once! This is a force multiplier, improving the capability wielded by a single soldier far beyond conventional capacity. Fourth, autonomy is scalable. Sensor capabilities and required processing power are proportional to the scale and speed of the vehicle as well as the desired or necessary degree of autonomy.

I believe it is necessary for people to understand robots in the context of scalable autonomy. The goal of the Grand Challenge is to solve the most basic step towards achieving vehicles with any kind of autonomy- simply getting from point A to point B. This does not need to require a human to dictate every little turn. UAV development has already surpassed this; many UAVs are capable of taking off, flying to a target, taking photographs, returning to base and landing all without any human control other than the initial mission plan. Ground robots simply face a more challenging environment than the relatively obstacle-free sky.

Save the human in the loop for the really difficult decisions- to shoot or not to shoot. Leave the driving to the machine.

9/29/2005 02:32:00 PM  
Blogger Slocum said...

UAV development has already surpassed this; many UAVs are capable of taking off, flying to a target, taking photographs, returning to base and landing all without any human control other than the initial mission plan. Ground robots simply face a more challenging environment than the relatively obstacle-free sky

Boy, you can say that again (the last part, that is). The challenge of flying a GPS determined route and returning to base is nothing compared to dealing with not only difficult terrain, but traffic of all kinds (other vehicles, pedestrians, stray animals, etc).

I understand all the points about the difficulty of fast, secure communication, but I still think that is a more solvable problem than machine intelligence at a level needed to operate in a complex ground environment. I just don't think it's ever going to be feasible in either the short or medium term (e.g. our lifetimes).

9/29/2005 03:19:00 PM  
