TechCrunch: autopilot learns to predict pedestrian actions

February 17, 2019, by autotimesnews

The University of Michigan, known for its autonomous-vehicle research, is working on an improved algorithm for predicting pedestrian movement that accounts not only for what people are doing, but for how they are doing it, according to TechCrunch. This body language can be crucial for predicting a person's next action.

Tracking pedestrians and predicting their actions is an important part of any autonomous vehicle's vision system. Knowing where a person is, and where they are headed, is essential for the car to operate, and while some companies claim their cars can see people under certain conditions, none claims that its vehicles recognize gestures and posture.

Such vision algorithms can (though ideally should not) be as simple as identifying a person and counting how many pixels they move over several frames. But human movement is, naturally, a little more complicated than that.
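For illustration, the naive approach described above can be sketched as follows. This is a hypothetical toy example, not the UM system: the detector output format and frame rate are assumptions.

```python
# Toy sketch of the naive pixel-counting approach: track a pedestrian's
# bounding-box centroid across frames and extrapolate linearly.
# Boxes are assumed to come from some upstream detector as (x1, y1, x2, y2).

def centroid(box):
    """Center (x, y) of a pixel bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def predict_next_position(boxes, fps=30.0):
    """Estimate pixel velocity from the last two detections and
    extrapolate one frame ahead. Ignores posture and gait entirely."""
    if len(boxes) < 2:
        raise ValueError("need at least two frames to estimate motion")
    (x0, y0), (x1, y1) = centroid(boxes[-2]), centroid(boxes[-1])
    vx, vy = (x1 - x0) * fps, (y1 - y0) * fps  # pixels per second
    return (x1 + vx / fps, y1 + vy / fps)

# Two detections of the same pedestrian, one frame apart:
print(predict_next_position([(100, 200, 140, 300), (104, 200, 144, 300)]))
# → (128.0, 250.0)
```

A model like this predicts only straight-line continuation, which is exactly the limitation the article points to: it has no notion of a person slowing, turning, or signaling.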

The new UM system uses lidar and stereo cameras to estimate not only the trajectory of a person's movement, but also their posture and gait. Posture can indicate whether a person is facing the car, using a cane, or hunched over a phone; gait indicates not just speed, but intention.

Is someone looking over their shoulder? They may be about to turn, or to dart into traffic. Arm outstretched? Perhaps they are signaling someone (or perhaps the car) to stop. These additional data help the system predict trajectories and build a more complete set of navigation plans and contingencies.
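One way such cues might feed a prediction system can be sketched as below. The cue names, structure, and scoring heuristic are all assumptions for illustration, not the UM researchers' actual design.

```python
# Illustrative only: combining pose and gait cues into a simple feature
# record and a toy "crossing risk" score. A real system would feed such
# features into a learned motion-prediction model, not a hand heuristic.
from dataclasses import dataclass

@dataclass
class PedestrianCues:
    position: tuple              # (x, y) in meters, from lidar/stereo depth
    speed: float                 # m/s, estimated from gait
    facing_car: bool             # head/torso orientation from pose estimate
    arm_extended: bool           # e.g. signaling a stop
    looking_over_shoulder: bool  # may precede a turn or a dash into traffic

def crossing_risk(cues: PedestrianCues) -> float:
    """Toy heuristic score in [0, 1]: higher means plan for a crossing."""
    score = 0.1
    if cues.looking_over_shoulder:
        score += 0.3
    if cues.arm_extended:
        score += 0.2
    if not cues.facing_car:
        score += 0.1
    score += min(cues.speed / 5.0, 0.3)  # faster gait, higher risk
    return min(score, 1.0)
```

The point of the sketch is the structure, not the numbers: posture and gait turn a bare trajectory into a richer signal the planner can act on.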

It is important to note that the system works well with only a few frames of observation: a single stride and arm swing, for example, are enough to produce a prediction that easily beats simpler models. That is a critical performance metric, since a pedestrian cannot be assumed to remain visible for more than a few frames between obstacles.

For now, not much can be done with this still little-studied data, but collecting and cataloging it is the first step toward making it an integral part of an autonomous vehicle's vision system.