ADAS & Autonomous | rFpro

Safer, faster testing of ADAS and Autonomous systems

rFpro’s software is used to train, test and validate supervised learning autonomous driving models and ADAS systems.

AI learns through training data. Simulated training data sets allow AI to experience life-threatening situations without risk to real road users.

Simulated testing allows rFpro users to push their AI to the limit, even in situations where the experiments would be life-threatening for passengers and other road users. rFpro experiments can scale across multiple simulations sharing the same virtual world, so tests can include real human road users too.


rFpro’s complete end-to-end, physically modelled simulation can be used for training AI, regression testing and identifying new failure modes, so rFpro customers’ simulation will be able to form part of future regulatory frameworks.

Full end-to-end testing also means rFpro users can involve their target customers – the passengers – in simulated testing. Involving passengers at the earliest possible stage helps ensure customer acceptance, protecting your investment in AI and autonomy.


how does rFpro deliver autonomy

rFpro’s five steps to simulated testing:

1. physically model the real world

The first step means simulating according to the laws of physics, abandoning the computationally efficient special effects that are adequate for games and cinematography, because it isn’t just humans we have to convince, it’s machine vision systems too. Objects in rFpro’s digital twins are made from materials that obey the laws of physics. When light strikes the paint on cars, the white lines on the road or the road surface itself, it must behave in the same way as it does in the real world.
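To make this concrete, the short Python sketch below shows the kind of energy-conserving, physically based reflection calculation this approach implies: an ideal diffuse surface can never return more light than the sun delivers to it. It is purely illustrative and not part of rFpro’s software; the albedo and irradiance figures are assumed example values.

    import math

    def reflected_radiance(albedo, sun_irradiance_w_m2, incidence_angle_deg):
        """Radiance leaving an ideal diffuse (Lambertian) surface, in W/(m^2*sr)."""
        assert 0.0 <= albedo <= 1.0  # energy conservation: reflect no more than arrives
        cos_theta = math.cos(math.radians(incidence_angle_deg))
        irradiance_on_surface = sun_irradiance_w_m2 * max(cos_theta, 0.0)
        return (albedo / math.pi) * irradiance_on_surface

    # Example: bright white lane paint vs. worn asphalt under ~1000 W/m^2 sunlight
    # arriving 30 degrees off the surface normal (all values are assumptions).
    print(reflected_radiance(0.8, 1000.0, 30.0))  # white road marking
    print(reflected_radiance(0.1, 1000.0, 30.0))  # dark asphalt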

The atmosphere is also physically modelled, allowing experiments to run at different times of day and in different weather conditions. Again, because rFpro’s digital twins are physically modelled, when rain is persistent, puddles accumulate in the correct places and you may start to see reflections in the road surface.

Creating a physically modelled world means our simulation can correlate with the real world. If you want to validate your simulation against the real world, you need to be physically modelling the real world. This will become essential as simulation starts to form part of the regulatory frameworks affecting AVs.


2. physically model your vehicle’s interaction with the real world

The second step is to physically model your vehicle’s interaction with that world, through its sensors and vehicle dynamics.

Underneath rFpro’s graphics is a physically modelled road surface: a 1cm grid spanning the drivable surface, accurate to 1mm. This means, for example, that AI can learn to drive the path humans would, avoiding the bumps and potholes that human drivers instinctively miss – essential if OEMs want AV customers in Detroit or the UK, where road quality is poor.
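As an illustration of what such a grid makes possible, the Python sketch below shows how a vehicle-dynamics or path-planning model might look up the road height under a tyre contact patch by bilinear interpolation between grid nodes. The array layout, grid spacing and pothole values are assumptions for the example, not rFpro’s data format.

    import numpy as np

    GRID_SPACING_M = 0.01  # assumed 1 cm node spacing

    def surface_height(height_grid, x, y, spacing=GRID_SPACING_M):
        """Bilinearly interpolated road height (m) at world position (x, y)."""
        gx, gy = x / spacing, y / spacing
        i, j = int(gx), int(gy)
        fx, fy = gx - i, gy - j
        h = height_grid  # 2-D array of height samples, one per grid node
        return ((1 - fx) * (1 - fy) * h[i, j] +
                fx * (1 - fy) * h[i + 1, j] +
                (1 - fx) * fy * h[i, j + 1] +
                fx * fy * h[i + 1, j + 1])

    # Example: a small synthetic patch of road containing a 3 cm deep pothole.
    patch = np.zeros((200, 200))
    patch[80:120, 80:120] -= 0.03
    print(surface_height(patch, 1.005, 1.005))  # height inside the pothole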

Some of our customers are more interested in using this vehicle-dynamics capability to ensure they can engineer their core DNA into their AI – whether that be ‘sporty’, ‘comfort’ or ‘eco-drive’.

rFpro also models the vehicle’s interaction with the real world through its sensors, the cameras and LiDAR that allow the AI to “see”. rFpro’s physically modelled simulation ensures conservation of energy and solves the beam-divergence and sensor-motion problems.
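The beam-divergence point can be illustrated with a simplified LiDAR range equation: the laser spot grows with distance, and the returned energy from a diffuse target falls off with the square of range, so small or distant objects return weaker signals. The divergence, reflectivity and receiver parameters below are assumed values; this is a sketch, not rFpro’s sensor model.

    import math

    def beam_footprint_diameter_m(range_m, divergence_rad=3e-3, exit_diameter_m=0.01):
        """Approximate spot diameter at the target for a diverging beam."""
        return exit_diameter_m + range_m * divergence_rad

    def returned_power_w(range_m, emitted_power_w=1.0, target_reflectivity=0.2,
                         receiver_area_m2=1e-3, atmos_transmission=0.95):
        """Simplified range equation for a diffuse target that fills the beam."""
        return (emitted_power_w * target_reflectivity * receiver_area_m2 *
                atmos_transmission ** 2) / (math.pi * range_m ** 2)

    for r in (10.0, 50.0, 200.0):
        print(r, beam_footprint_diameter_m(r), returned_power_w(r))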

3. scale your testing massively

When most people think about parallelisation of experiments they imagine a list of 100 tasks that takes a day to complete on one computer: buy ten computers and you can complete the list in a tenth of the time. But AVs are complex; you cannot simulate an autonomous vehicle with reasonable correlation to the real world on one computer. So rFpro can also scale each individual experiment across multiple CPUs and multiple GPUs, to match the complexity of the real vehicle under test, with its multiple cameras and LiDAR sensors.
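Conceptually, scaling a single experiment looks like the sketch below: each sensor’s frame for a given time step is independent, so it can be rendered by a separate worker (a CPU core or GPU) and the results gathered before the next physics step. The sensor list and render_sensor_frame function are placeholders for illustration, not rFpro’s interface.

    from concurrent.futures import ProcessPoolExecutor

    SENSORS = ["front_camera", "rear_camera", "left_camera",
               "right_camera", "roof_lidar"]

    def render_sensor_frame(sensor_name, sim_time_s):
        """Placeholder for the expensive per-sensor render of one frame."""
        return f"{sensor_name} rendered at t={sim_time_s:.3f}s"

    def step_experiment(sim_time_s):
        # Render every sensor of this one experiment in parallel, then gather
        # the frames so the vehicle under test can fuse them for this step.
        with ProcessPoolExecutor(max_workers=len(SENSORS)) as pool:
            return list(pool.map(render_sensor_frame, SENSORS,
                                 [sim_time_s] * len(SENSORS)))

    if __name__ == "__main__":
        print(step_experiment(0.010))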

Over the next five years, users will end up with large numbers of experiments, possibly tens or hundreds of thousands, each with multiple variations, because every failure mode a user discovers results in a few more tests being added to their library.

4. scale the complexity of your tests

Real autonomous vehicles are highly complex, performing sensor fusion across multiple cameras, LiDAR and other sensors. rFpro scales your individual experiments across multiple CPUs and GPUs in order to achieve an adequate level of correlation with your real-world testing.
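The most basic building block of that sensor fusion is time alignment: pairing each camera frame with the LiDAR sweep captured closest to it. The sketch below illustrates the idea with assumed timestamps and data structures; it is not an rFpro output format.

    def pair_by_timestamp(camera_frames, lidar_sweeps, max_skew_s=0.02):
        """camera_frames and lidar_sweeps are sorted lists of (timestamp_s, payload)."""
        pairs = []
        for cam_t, cam_data in camera_frames:
            nearest_t, nearest_data = min(lidar_sweeps, key=lambda s: abs(s[0] - cam_t))
            if abs(nearest_t - cam_t) <= max_skew_s:
                pairs.append((cam_data, nearest_data))
        return pairs

    cams = [(0.000, "img0"), (0.033, "img1"), (0.066, "img2")]
    lidar = [(0.005, "sweep0"), (0.055, "sweep1")]
    print(pair_by_timestamp(cams, lidar))  # frames with no close sweep are dropped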

5. add real human drivers, in simulation, to your experiments

rFpro experiments and tests can span multiple simulators. This means that a team of human test drivers can join an AV or ADAS simulation – a proven way to detect CAV failure modes. Humans are random and unpredictable, and therefore valuable for both training and testing ADAS and AV systems. rFpro allows humans to join experiments without risk of injury or death.

Up to 50 simulations may participate in, and simultaneously share, the same virtual world. Your AI can be pushed to the limit, sharing roads packed with real human drivers, in a completely safe environment.
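Conceptually, sharing a virtual world amounts to every participating simulator, whether human-driven or an AV under test, reporting its vehicle state each tick and receiving everyone else’s in return, so all participants see the same traffic. The sketch below is a purely illustrative model of that exchange, not rFpro’s networking layer.

    from dataclasses import dataclass

    @dataclass
    class VehicleState:
        participant_id: str
        x_m: float
        y_m: float
        heading_deg: float
        speed_mps: float

    def share_world_tick(reported_states):
        """Return, for each participant, the states of every other vehicle."""
        world = {s.participant_id: s for s in reported_states}
        return {pid: [s for other, s in world.items() if other != pid]
                for pid in world}

    tick = [VehicleState("human_driver_01", 120.0, 4.5, 90.0, 31.0),
            VehicleState("av_under_test", 95.0, 8.0, 88.0, 27.5)]
    print(share_world_tick(tick)["av_under_test"])  # what the AV sees this tick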

Simulating the real world today, with human drivers sharing the virtual world with your AVs, allows you to test years before your AVs actually meet humans in the real world.

For example, the following video, in which some of the traffic on the highway is human-driven, shows an AV trying to merge onto a highway. It demonstrates how important it is to disguise your AVs to look exactly the same as conventional vehicles.

Human drivers quickly learn that when a vehicle trying to merge is an AV, they can simply ignore it, because it is not going to risk forcing its way into their lane.

Learning this in simulation costs a fraction of a percent of the cost of building, maintaining and managing a fleet of real test vehicles. Simulation allows you to learn more, sooner, and, critically, it allows you to push your AI hard in complex scenarios that would put human road users at risk in the real world.
