Supervised Learning Autonomous Driving | rFpro

rFpro offers a comprehensive environment for the development, training, testing and validation of Supervised Learning Autonomous Driving systems.


Ultra-HiDef Graphical Fidelity

When developing systems based on machine learning from sensor feeds, such as camera, LiDAR and radar, the quality of the 3D environment model is critical. The closer the virtual world correlates with the real world, the closer your algorithms' behaviour in virtual scenarios will correlate with their behaviour in real ones.

Ideally you want to achieve complete correlation between your real and virtual testing. rFpro’s HiDef models are built around a graphics engine that includes a physically modelled atmosphere, weather and lighting, as well as physically modelled materials for every object in the scene.

Hundreds of kilometres of public-road models are available off the shelf from rFpro, spanning North America, Asia and Europe, including multi-lane highways and urban, rural and mountain routes, all copied faithfully from the real world.

rFpro also allows the import of models from third-party tools and formats, including IPG Road5, .max, .fbx, OpenFlight, OpenSceneGraph and .obj. The rFpro SceneEditor enables you to add or modify material properties to suit your experiments.


Sensor Model Feeds

rFpro supports simultaneous output to multiple sensor models, synchronised perfectly in simulation time and to within 100 µs in real time, to ensure coherent data. Over a 1 Gbit/s LAN, up to 80 simultaneous feeds per ego vehicle are possible. Each sensor may be fed multiple simultaneous streams, for example to support RGB, HDR, depth, point-cloud, optical-flow, object-segmentation and road-segmentation outputs. In this way the data to train, test and validate sensor models and algorithms may be streamed simultaneously.
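
As a rough sketch of what consuming such feeds can look like, the Python fragment below opens several streams from one camera and reads them frame by frame. The `rfpro` module and every call on it (`connect`, `sensor`, `open_stream`, and so on) are hypothetical placeholders used only to illustrate the pattern, not rFpro's actual API:

```python
# A minimal sketch of consuming several synchronised streams from one sensor.
# The `rfpro` module and all its calls are hypothetical placeholders.
import rfpro

session = rfpro.connect(host="localhost", port=5000)   # hypothetical endpoint
front_camera = session.sensor("front_camera")

# One camera position, several derived streams rendered from the same frame.
streams = {
    name: front_camera.open_stream(name)
    for name in ("rgb", "hdr", "depth", "optical_flow", "object_segmentation")
}

while session.running():
    frame_id = session.wait_for_frame()                # shared simulation tick
    data = {name: s.read(frame_id) for name, s in streams.items()}
    # data["rgb"], data["depth"], ... are aligned to the same instant, so
    # they can serve together as training input and ground-truth labels.
```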

rFpro’s graphics engine is highly efficient for ground-vehicle simulation, allowing even extremely high-bandwidth, high-resolution HDR images to be streamed in real time. This is essential when running an ECU or Hardware-in-the-Loop rig, or when one or more human test drivers share the same virtual world.

rFpro can be used to simulate feeds to multiple sensor model types, including camera, radar, LiDAR, Flash LiDAR, GPS, DGPS, infrastructure sensors and mapping interfaces. rFpro also supports off-the-shelf sensor models, such as IPG’s CarMaker Physical Sensor Models module, which shares the IPG Road5 road network, and Claytex’ library of physically modelled sensor models.

Sensor feeds may be labelled and can include first- and second-order derivatives (velocity and acceleration) for moving objects, e.g. pedestrians and traffic. This is particularly useful for the validation of emergency-prediction algorithms.
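
To make that concrete, a labelled record for a moving object might look like the sketch below. The field names are invented, not rFpro's, but they show how position plus its first and second derivatives supports the short-horizon prediction an emergency-prediction validator needs:

```python
# Illustrative shape of a labelled moving-object record; field names invented.
from dataclasses import dataclass

@dataclass
class LabelledObject:
    object_id: int
    category: str                              # e.g. "pedestrian", "vehicle"
    position: tuple[float, float, float]       # m, world frame
    velocity: tuple[float, float, float]       # m/s   (1st-order derivative)
    acceleration: tuple[float, float, float]   # m/s^2 (2nd-order derivative)

def predict_position(obj: LabelledObject, dt: float):
    # Constant-acceleration extrapolation: the kind of short-horizon
    # prediction an emergency-prediction validator compares to ground truth.
    return tuple(
        p + v * dt + 0.5 * a * dt * dt
        for p, v, a in zip(obj.position, obj.velocity, obj.acceleration)
    )
```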

Virtual sensors may be placed and oriented precisely anywhere on the test vehicles, with complete control over each sensor feed’s size, resolution, field of view and so on.
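
For illustration only, a sensor placement could be described along these lines; the keys below are invented, and rFpro exposes equivalent controls through its own configuration:

```python
# Hypothetical sensor-placement description; keys invented for illustration.
front_camera_config = {
    "mount_point": (1.8, 0.0, 1.4),    # x, y, z in metres from vehicle origin
    "orientation": (0.0, -2.0, 0.0),   # yaw, pitch, roll in degrees
    "resolution": (1920, 1080),        # pixels
    "horizontal_fov_deg": 90.0,
    "frame_rate_hz": 30.0,
}
```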

Here is an example of Claytex’ physically modelled LiDAR sensor model (in this case a Velodyne VLP-16) running inside rFpro. The simulation also demonstrates the use of a customer’s IPG CarMaker vehicle model, with IPG Traffic controlling the experiment.

This simulation demonstrates some of the benefits of running inside rFpro’s physically modelled virtual world: conservation of energy, solving the beam-divergence problem and solving the sensor-motion problem. Not only do the LiDAR sensors rotate in real time, but the views from the virtual sensors are affected by the vehicle driving across bumps and repairs in the road surface, and by the car’s braking and cornering.

[Image gallery: multiple variations of every experiment, with different weather, time of day . . .]

[Image: Training, testing and validating Supervised Learning Autonomous Driving]


Integrating your Models and Algorithms

rFpro makes it easy to interface your models to the virtual world. For example, at the simplest level your algorithms and models may pass in basic driver controls, e.g. steer angle, brake and throttle commands.
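
A minimal closed-loop sketch of that simplest level follows, again using hypothetical `rfpro` names rather than the real interface:

```python
# Minimal closed-loop control sketch; all `rfpro` names are placeholders.
import rfpro

def my_policy(state):
    # Stand-in for your model: e.g. a lane-keeping controller or a network.
    return 0.0, 0.3, 0.0            # steer angle, throttle, brake

session = rfpro.connect(host="localhost", port=5000)
ego = session.ego_vehicle()

while session.running():
    state = ego.read_state()        # position, speed, heading, ...
    steer, throttle, brake = my_policy(state)
    ego.send_controls(steer=steer, throttle=throttle, brake=brake)
    session.step()                  # advance one simulation tick
```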

As testing develops you can simulate the effect of the vehicle’s dynamic behaviour on the sensor feeds, for example the effect of vehicle motion over bumpy roads or during rapid manoeuvres. You might also want a more accurate vehicle dynamics model so that the car behaves correctly in an emergency manoeuvre, possibly interacting with the real ABS and stability-control systems. rFpro includes interfaces to all the mainstream vehicle modelling tools, including CarMaker, CarSim, Dymola, SIMPACK, dSPACE ASM, AVL VSM, Siemens Virtual.Lab Motion, DYNAware, Simulink, C++, etc. rFpro also allows you to use industry-standard tools such as MATLAB/Simulink and Python to modify and customise experiments.
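
As a sketch of that kind of scripted customisation, the loop below re-runs one scenario across weather and time-of-day variations, in the spirit of running multiple variations of every experiment. The `rfpro` calls and the experiment name are illustrative assumptions:

```python
# Hypothetical Python sweep over experiment variations; API names invented.
import itertools
import rfpro

weathers = ["clear", "rain", "fog", "snow"]
times_of_day = ["06:00", "12:00", "18:00", "22:00"]

for weather, tod in itertools.product(weathers, times_of_day):
    experiment = rfpro.load_experiment("urban_junction_cut_in")  # invented
    experiment.set_weather(weather)
    experiment.set_time_of_day(tod)
    experiment.run(record_to=f"runs/cut_in_{weather}_{tod.replace(':', '')}")
```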


Traffic, Pedestrians and Manoeuvres

The world can be populated with Programmed Traffic and semi-intelligent Swarm traffic: vehicles and pedestrians that follow the rules of the road and can be used to provoke a particular response or emergency behaviour. It is also possible to replay manoeuvres recorded from previous simulation runs.

rFpro’s open Traffic interface allows the use of Swarm traffic and Programmed Traffic from tools such as the open-source SUMO, IPG Traffic, dSPACE ASM traffic, PTV VisSim, and VTD. Vehicular and pedestrian traffic can share the road network correctly with perfectly synchronised traffic and pedestrian signals, while allowing ad-hoc behaviour, such as pedestrians stepping into the road.

rFpro passes the ego vehicle(s) under the control of your experiment to the Traffic system, so that traffic avoids them and gives way to them at junctions, according to the rules of the road network.

Programmed Traffic may also be injected into experiments by your own models or custom code (e.g. Simulink), making it easy to recreate real-world scenarios triggered by an event.
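
As a sketch of such event-triggered injection, with invented `rfpro` names standing in for the open Traffic interface: when the ego vehicle passes a trigger point, a scripted pedestrian is spawned to recreate a recorded real-world event.

```python
# Event-triggered traffic injection sketch; all `rfpro` names are invented.
import rfpro

session = rfpro.connect(host="localhost", port=5000)
ego = session.ego_vehicle()

TRIGGER = (120.0, 45.0)    # world x, y of the trigger point, in metres
fired = False

while session.running():
    x, y, _ = ego.read_state().position
    if not fired and abs(x - TRIGGER[0]) < 2.0 and abs(y - TRIGGER[1]) < 2.0:
        # Recreate the recorded event: a pedestrian steps into the road
        # ahead of the ego vehicle.
        session.traffic.spawn(
            kind="pedestrian",
            position=(x + 25.0, y + 3.0, 0.0),
            path="crossing_path_07",       # pre-recorded manoeuvre
        )
        fired = True
    session.step()
```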


Human Drivers

The virtual world in rFpro may be populated by ‘ego vehicles’ (the vehicles controlled by your models) as well as by Swarm traffic and Programmed Traffic.

You may also add vehicles under human control into the virtual world. Your test drivers can be in full-scale simulators or at desktop workstations with basic steering and pedal controls. This allows human drivers to test-drive vehicles with ADAS systems, to ride as passengers in a car under the control of a fully autonomous system, or simply to drive around the virtual world, either to subjectively evaluate the behaviour of autonomous vehicles or to provoke particular behaviours and emergency scenarios.

This is really important for autonomous developers:

Firstly, it means that rFpro can be used on a simulator for the subjective evaluation of riding as a passenger under the control of your autonomous systems, even replicating the driving dynamics, so you can evaluate how comfortable, how safe or how anxious the passenger feels. rFpro will help you achieve consumer acceptance as well as safety.

Secondly, it means you can add human drivers into autonomous vehicle simulations. Your simulated AV can share the same virtual world as one or more human test drivers; at the moment up to fifty human drivers may join an experiment. There is huge value in throwing human drivers at autonomous tests: humans are unpredictable, slightly random, never exactly the same twice, and we make mistakes. We know of no better way to provoke new failure modes in autonomous vehicles than by throwing human drivers at simulation.

The best thing is that nobody dies in simulation. You can push your AI to the limit, testing it in complex situations, with real human road users, entirely in simulation, and nobody is hurt when it goes wrong.


Scalability

The benefits of scaling multiple experiments across a data centre are clear. Uniquely, rFpro goes one step further: CAVs are complex vehicles carrying multiple sensors, and rFpro scales each experiment across multiple CPUs and GPUs to match the complexity of the real vehicle being simulated.

When most people think about parallelising experiments they imagine a list of 100 tasks that takes a day to complete on one computer: buy ten computers and you can complete the list in a tenth of the time. But AVs are complex; you cannot simulate an autonomous vehicle with reasonable correlation to the real world on one computer. So rFpro also scales each experiment across multiple CPUs and multiple GPUs to match the complexity of the real vehicle under test. This means customers can simulate their autonomous vehicles, with correlation to the real world, even when performing sensor fusion across tens of virtual sensor models on each car.
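
One way to picture that scaling, as a sketch with an invented layout format: each sensor's rendering is pinned to a host and GPU, so a single experiment fans out across the data centre.

```python
# Invented layout format: one experiment's sensors spread across hosts/GPUs.
sensor_layout = {
    "front_camera": {"host": "render-node-01", "gpu": 0},
    "rear_camera":  {"host": "render-node-01", "gpu": 1},
    "roof_lidar":   {"host": "render-node-02", "gpu": 0},
    "front_radar":  {"host": "render-node-02", "gpu": 1},
    # ... tens of sensors, across as many GPUs as the vehicle requires
}
```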

Over the next five to ten years, users will end up with very large numbers of experiments, possibly tens or hundreds of thousands, each with multiple variations, because every failure mode a user discovers results in a few more tests being added to their library.
