DLAD - Deep Learning Autonomous Driving | rFpro

rFpro offers a comprehensive environment for the development, training, testing and validation of Deep Learning Autonomous Driving (DLAD) systems


HiDef Graphical Fidelity

When developing systems based on machine learning from sensor feeds, such as camera, LiDAR and radar, the quality of the 3D environment model is critical. The closer the virtual world matches the real world, the closer your algorithms' behaviour in virtual scenarios will match their behaviour in real ones.

Ideally you want to achieve complete correlation between your real and virtual testing. rFpro’s HiDef models are built around a graphics engine that includes a physically modeled atmosphere, weather and lighting, as well as physically modeled materials for every object in the scene.

Hundreds of kilometres of public road models are available off the shelf from rFpro, spanning North America, Asia and Europe, including multi-lane highways and urban, rural and mountain routes, all copied faithfully from the real world.

rFpro also allows the import of models from third-party sources, including IPG Road5, .max, .fbx, OpenFlight, Open Scene Graph and .obj formats. The rFpro SceneEditor then allows you to add or modify material properties to suit your experiments.


Sensor Model Feeds

rFpro supports simultaneous output to multiple sensor models, synchronised to within 100 µs to ensure coherent data. Over a 1 Gbit LAN, up to 80 simultaneous feeds per ego vehicle are possible. Each sensor may be fed multiple simultaneous streams, for example RGB, HDR, depth, point cloud, object segmentation and road segmentation. In this way, the data needed to train, test and validate sensor models and algorithms may all be streamed simultaneously.
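As a rough illustration of what the 100 µs synchronisation guarantee means in practice, the sketch below checks whether one bundle of per-feed frame timestamps is coherent. The feed names and the dict layout are illustrative assumptions, not the rFpro API.

```python
# Sketch: verifying that a bundle of simultaneous sensor frames is
# coherent, i.e. every frame falls within the stated 100 µs window.
# Feed names and data layout are illustrative, not the rFpro API.

SYNC_WINDOW_S = 100e-6  # 100 microseconds

def feeds_coherent(frame_timestamps, window_s=SYNC_WINDOW_S):
    """Return True if the worst-case skew across feeds fits the window."""
    times = list(frame_timestamps.values())
    return (max(times) - min(times)) <= window_s

# One bundle of frame timestamps (seconds) from three hypothetical feeds.
bundle = {
    "front_camera_rgb": 12.000010,
    "front_camera_depth": 12.000020,
    "roof_lidar": 12.000055,
}
```

Here the three frames span 45 µs, so the bundle counts as coherent; a bundle spread over a millisecond would not.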

rFpro’s graphics engine is highly efficient for ground-vehicle simulation, even allowing high-bandwidth, high-resolution HDR images to be streamed in real time. This is essential when running an ECU or Hardware-in-the-Loop setup, or when one or more human test drivers share the same virtual world.

rFpro can be used to simulate feeds to multiple sensor model types including camera, radar, LiDAR, Flash LiDAR, GPS, DGPS, infrastructure sensors and mapping interfaces. rFpro also supports off-the-shelf sensor models, such as IPG’s CarMaker Physical Sensor Models module, sharing the IPG Road5 road network.

Sensor feeds may be labelled and can include first- and second-order derivatives (velocity and acceleration) for moving objects such as pedestrians and traffic. This is particularly useful for the validation of emergency-prediction algorithms.
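If you were reconstructing those derivatives yourself from successive position labels rather than consuming them from the feed, a central-difference estimate is the usual approach. This is a minimal sketch under that assumption; it is not how rFpro computes the values it streams.

```python
# Sketch: first- and second-order derivatives (velocity, acceleration)
# of a labelled moving object, estimated from three successive position
# samples by central differences. Scalar positions for simplicity.

def derivatives(positions, dt):
    """Return (velocity, acceleration) at the middle of three samples.

    positions: [p0, p1, p2] successive positions (metres)
    dt: sample interval (seconds)
    """
    p0, p1, p2 = positions
    velocity = (p2 - p0) / (2.0 * dt)            # central difference
    acceleration = (p2 - 2.0 * p1 + p0) / (dt * dt)
    return velocity, acceleration
```

For a pedestrian sampled at 0 m, 1 m and 4 m over 1 s steps, this gives a velocity of 2 m/s and an acceleration of 2 m/s² at the middle sample.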

Sensors may be placed and oriented accurately anywhere on the test vehicles, with complete control over each feed’s size, resolution, field of view and so on.
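The kind of information such a sensor placement carries can be sketched as a small record of pose plus feed properties. All field names here are illustrative assumptions; rFpro’s actual configuration format will differ.

```python
# Sketch: describing a sensor's mounting pose and feed properties.
# Field names are illustrative -- not rFpro's configuration schema.
from dataclasses import dataclass

@dataclass
class SensorConfig:
    name: str
    position_m: tuple        # (x, y, z) offset from the vehicle origin
    orientation_deg: tuple   # (roll, pitch, yaw)
    resolution: tuple        # (width, height) in pixels
    fov_deg: float           # horizontal field of view

# A forward-facing camera mounted high on the windscreen, pitched
# down slightly.
front_camera = SensorConfig(
    name="front_camera",
    position_m=(1.8, 0.0, 1.3),
    orientation_deg=(0.0, -2.0, 0.0),
    resolution=(1920, 1080),
    fov_deg=90.0,
)
```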



Training, testing, validating Deep Learning Autonomous Driving


Integrating your Models and Algorithms

rFpro makes it easy to interface your models to the virtual world. At the simplest level, your algorithms and models may pass in basic driver controls, e.g. steer angle, brake and throttle commands.
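The shape of that simple level can be sketched as a per-step function that maps a little of the vehicle’s state to the three basic controls. The interface, gains and normalised control ranges are all illustrative assumptions, not rFpro’s API.

```python
# Sketch: a trivial closed-loop driver emitting the basic controls the
# text mentions (steer, throttle, brake). Interface and gains are
# hypothetical; this only illustrates the shape of the data passed in.

def driver_step(lateral_offset_m, speed_mps, target_speed_mps):
    """Proportional steering toward lane centre plus bang-bang speed hold."""
    # Steer against the lateral offset, clamped to a normalised [-1, 1].
    steer = max(-1.0, min(1.0, -0.5 * lateral_offset_m))
    if speed_mps > target_speed_mps:
        throttle, brake = 0.0, 0.3
    else:
        throttle, brake = 0.4, 0.0
    return {"steer": steer, "throttle": throttle, "brake": brake}
```

A vehicle 0.4 m right of centre and below its target speed would get a small left steer command with throttle applied and no brake.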

As testing develops, you may want to simulate the dynamic behaviour of the vehicle. For example, you may want to simulate the effect of vehicle motion over bumpy roads, or during rapid manoeuvres, on the sensor feeds. Or you might want a more accurate vehicle dynamics model so that the car behaves correctly in an emergency manoeuvre, possibly interacting with the real ABS and Stability Control systems. rFpro includes interfaces to all the mainstream vehicle modelling tools, including CarMaker, CarSim, Dymola, SIMPACK, dSPACE ASM, AVL VSM, Siemens Virtual.Lab Motion, DYNAware, Simulink and C++. rFpro also allows you to use industry-standard tools such as MATLAB Simulink to modify and customise experiments.
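To make the idea of a vehicle plant model concrete, here is a minimal kinematic bicycle model as a stand-in. It is a deliberately simplified sketch: real plant models from tools such as CarMaker or CarSim add tyre, suspension and ABS/ESC behaviour that matters for the emergency manoeuvres discussed above.

```python
import math

# Sketch: a minimal kinematic bicycle model as a stand-in for the kind
# of vehicle plant model the co-simulation interfaces carry. Real tools
# (CarMaker, CarSim, etc.) model far more: tyres, suspension, ABS/ESC.

def bicycle_step(state, steer_rad, speed_mps, wheelbase_m, dt):
    """Advance the pose (x, y, heading) by one time step.

    state: (x_m, y_m, heading_rad)
    steer_rad: front-wheel steering angle
    """
    x, y, heading = state
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt
    heading += (speed_mps / wheelbase_m) * math.tan(steer_rad) * dt
    return (x, y, heading)
```

Called in a loop at the simulation rate, this is the role the plant model plays: controls in, updated vehicle motion out, which in turn moves the sensors through the world.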


Traffic, Pedestrians and Manoeuvres

Your experiments and training sets in rFpro can make use of manoeuvres comprising recordings from previous simulation runs, semi-intelligent Swarm traffic that populates the virtual world with vehicles and pedestrians following the rules of the road, and programmed traffic designed to provoke a particular response or emergency behaviour.

rFpro’s Open Traffic interface allows the use of Swarm traffic from tools such as the open-source SUMO, PTV VisSim, and VTD. Vehicular and pedestrian traffic can share the road network correctly with perfectly synchronised traffic and pedestrian signals, while allowing ad-hoc behaviour, such as pedestrians stepping into the road.

rFpro passes the ego vehicle(s) under the control of your experiment to the Swarm traffic, so that the swarm vehicles avoid, and give way to, your vehicle at junctions, according to the rules of the road network.

Programmed Traffic may be injected into experiments by your models, by tools like IPG Traffic, by your vehicle plant-models and by your own custom code (e.g. Simulink) making it easy to recreate real-world scenarios triggered by an event.
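Event-triggered programmed traffic can be sketched as a simple rule evaluated each simulation step: a scripted actor holds position until the ego vehicle comes within a trigger distance, then performs its scripted action. The names, distances and one-dimensional positions below are illustrative assumptions.

```python
# Sketch: event-triggered programmed traffic. A scripted pedestrian
# steps into the road only once the ego vehicle closes within a trigger
# distance, recreating a repeatable emergency scenario. Names and the
# 1-D positions are illustrative, not an rFpro interface.

TRIGGER_DISTANCE_M = 25.0

def pedestrian_action(ego_position_m, pedestrian_position_m,
                      trigger_m=TRIGGER_DISTANCE_M):
    """Return the scripted action for this simulation step."""
    gap = abs(pedestrian_position_m - ego_position_m)
    if gap <= trigger_m:
        return "step_into_road"
    return "wait_on_kerb"
```

Because the trigger is a function of the ego vehicle’s own motion, the same emergency unfolds at the same relative moment on every run, which is what makes the scenario repeatable.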


Human Drivers

The virtual world in rFpro may be populated by ego vehicles (the vehicles controlled by your models), as well as by Swarm traffic and Programmed traffic.

You may also add vehicles under human control into the virtual world. Your test drivers can sit in full-scale simulators or at desktop workstations with basic steering and pedal controls. This allows human drivers to test cars fitted with deep-learning ADAS systems, to ride as passengers in a car under the control of a fully autonomous DLAD system, or simply to drive around the virtual world, either to subjectively evaluate the behaviour of autonomous vehicles or to provoke particular behaviour or emergency scenarios.



rFpro scales from a single instance to multiple instances running offline and unattended, allowing you to extend the use of rFpro to regression testing of your models. Your experiments have control over environmental properties including, but not limited to, time of day, day of the year, weather, the classes and colours of vehicles used, manoeuvres, traffic, roadside objects and scenery, allowing you to set up and manage complex, deterministic, repeatable scenarios.
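One common way to drive such unattended regression runs is to enumerate the cross product of the controlled properties so that every run is fully specified up front. The property names and values below are illustrative assumptions; the point is only that each scenario is deterministic and repeatable.

```python
import itertools

# Sketch: enumerating deterministic regression scenarios as the cross
# product of environmental properties. Property names and values are
# illustrative; each entry fully specifies one repeatable run.

TIMES_OF_DAY = ["dawn", "noon", "dusk"]
WEATHERS = ["clear", "rain"]

def scenario_matrix(times=TIMES_OF_DAY, weathers=WEATHERS):
    """Build the full list of scenario configurations, one per run."""
    return [
        {"time_of_day": t, "weather": w, "run_id": i}
        for i, (t, w) in enumerate(itertools.product(times, weathers))
    ]
```

Because the matrix is generated rather than hand-written, rerunning the regression suite after a model change replays exactly the same scenarios in exactly the same order.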
