Hik wrote on 16 June 2017 12:47:
[...]
I did see it when I was looking for the Bosch RRS patent earlier this week. Unfortunately I can't enlarge those images, so I can't make out many details.
This patent looks to me like an extension of RoadDNA, with these additions:
- data acquisition via cameras (notably the road surface, which is also scanned with Lidar as an extra source)
- combination with radio sources other than Lidar, by which I take them to mean Radar.
This could mean it is a proposal to complete RoadDNA not only on the basis of Lidar (and camera), but with a Radar (Bosch RRS) addition.
In effect, the patent follows TomTom's statement that Bosch RRS would (or could) be a complement to RoadDNA.
Not only is that described here; what's more, they also bring cameras straight into it (Autonomos?).
----------------------------
Figure 24 depicts an exemplary system in accordance with embodiments of the invention in which data collected by one or more vehicle sensors (laser, camera, and radar) is used to generate an "actual footprint" of the environment as seen by the vehicle. The "actual footprint" is compared, i.e. correlated, to a corresponding "reference footprint" that is determined from reference data associated with a digital map, wherein the reference data includes at least a distance channel, and may include a laser reflectivity channel and/or a radar reflectivity channel, as is discussed above. Through this correlation, the position of the vehicle can be accurately determined relative to the digital map.
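To make that correlation step concrete, here is a minimal sketch (my own, not from the patent) of sliding an actual depth-map footprint along a wider reference footprint and picking the longitudinal offset with the highest normalised cross-correlation; the name correlate_footprints and the scoring choice are my assumptions:

```python
import numpy as np

def correlate_footprints(actual: np.ndarray, reference: np.ndarray) -> int:
    """Slide the 'actual footprint' depth image along a wider 'reference
    footprint' and return the column offset with the highest normalised
    cross-correlation score, i.e. the most likely longitudinal position."""
    h, w = actual.shape
    a = (actual - actual.mean()) / (actual.std() + 1e-9)
    best_offset, best_score = 0, -np.inf
    for offset in range(reference.shape[1] - w + 1):
        window = reference[:, offset:offset + w]
        win = (window - window.mean()) / (window.std() + 1e-9)
        score = float((a * win).mean())
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# Toy check: embed the actual footprint at a known offset in the reference.
rng = np.random.default_rng(0)
reference = rng.random((32, 200))
actual = reference[:, 60:110] + rng.normal(0.0, 0.01, (32, 50))  # noisy vehicle view
print(correlate_footprints(actual, reference))  # -> 60
```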
In a first example use case, as depicted in Figure 25A, an actual footprint is determined from a laser-based range sensor, e.g. LIDAR sensor, in the vehicle and correlated to a reference footprint determined from data in the distance channel of the reference data, so as to achieve continuous positioning of the vehicle. A first approach is shown in Figure 25B in which the laser point cloud as determined by the laser-based range sensor is converted into a depth map of the same format as the reference data, and the two depth map images are compared. A second, alternative approach is shown in Figure 25C in which a laser point cloud is reconstructed from the reference data, and this reconstructed point cloud is compared to the laser point cloud as seen by the vehicle.
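The first approach (converting the laser point cloud into the reference depth-map format) could look roughly like this. Again my own sketch: the axis convention (x along the road, y lateral, z height) and point_cloud_to_depth_map are assumptions, and the resulting raster could then be fed to correlate_footprints above:

```python
import numpy as np

def point_cloud_to_depth_map(points: np.ndarray,
                             x_res: float = 0.5, z_res: float = 0.5,
                             x_bins: int = 100, z_bins: int = 20) -> np.ndarray:
    """Rasterise a laser point cloud (N x 3: x = along-road, y = lateral,
    z = height) into a depth map: each (height, along-road) cell keeps the
    nearest lateral distance; cells without returns stay at infinity."""
    depth = np.full((z_bins, x_bins), np.inf)
    xi = (points[:, 0] / x_res).astype(int)
    zi = (points[:, 2] / z_res).astype(int)
    valid = (xi >= 0) & (xi < x_bins) & (zi >= 0) & (zi < z_bins)
    for x, z, y in zip(xi[valid], zi[valid], np.abs(points[valid, 1])):
        depth[z, x] = min(depth[z, x], y)
    return depth
```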
In a second example use case, as depicted in Figure 26A, an actual footprint is determined from a camera in the vehicle and correlated to a reference footprint determined from data in the distance channel of the reference data, so as to achieve continuous positioning of the vehicle, although only during the day. In other words, in this example use case a reference depth map is used to construct a 3D point cloud or view that is then compared to a 3D scene or view obtained from multiple vehicle cameras or a single vehicle camera. A first approach is shown in Figure 26B in which stereo vehicle cameras are used to build a disparity based 3D model, which is then used to construct a 3D point cloud for correlation with the 3D point cloud constructed from the reference depth map. A second approach is shown in Figure 26C in which a sequence of vehicle camera images is used to construct a 3D scene, which is then used to construct a 3D point cloud for correlation with the 3D point cloud constructed from the reference depth map. Finally, a third approach is shown in Figure 26D in which a vehicle camera image is compared with a view created from the 3D point cloud constructed from the reference depth map.
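The disparity step in the first approach is standard stereo geometry (Z = f·B/d); a minimal back-projection sketch, with disparity_to_point_cloud, focal_px and baseline_m as hypothetical names of my own:

```python
import numpy as np

def disparity_to_point_cloud(disparity: np.ndarray,
                             focal_px: float, baseline_m: float) -> np.ndarray:
    """Back-project a stereo disparity image into a 3D point cloud with the
    pinhole model: Z = f * B / d, X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    Assumes the principal point sits at the image centre."""
    h, w = disparity.shape
    cx, cy = w / 2.0, h / 2.0
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0            # zero disparity = no depth information
    z = focal_px * baseline_m / disparity[valid]
    x = (u[valid] - cx) * z / focal_px
    y = (v[valid] - cy) * z / focal_px
    return np.column_stack([x, y, z])
```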
A third example use case, as depicted in Figure 27A, is a modification to the second example use case, wherein laser reflectivity data of the reference data, which is stored in a channel of the depth map, can be used to construct a 3D point cloud or view that may be compared to a 3D point cloud or view based on images captured by one or more cameras. A first approach is shown in Figure 27B, wherein a sequence of vehicle camera images is used to construct a 3D scene, which is then used to construct a 3D point cloud for correlation with the 3D point cloud constructed from the reference depth map (using both the distance and laser reflectivity channels). A second approach is shown in Figure 27C in which a vehicle camera image is compared with a view created from the 3D point cloud constructed from the reference depth map (again using both the distance and laser reflectivity channels).
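One way the laser reflectivity channel might be used here is to keep only the bright, camera-relevant points (lane markings, signs) of the reconstructed cloud; the threshold and the shared raster layout are my assumptions, not the patent's:

```python
import numpy as np

def reflective_point_cloud(depth: np.ndarray, reflectivity: np.ndarray,
                           min_reflectivity: float = 0.3,
                           x_res: float = 0.5, z_res: float = 0.5) -> np.ndarray:
    """Reconstruct a 3D point cloud from the reference depth map (distance
    channel) and keep only points whose laser reflectivity exceeds a
    threshold, i.e. features a camera is likely to pick up as well.
    Both inputs share the same raster layout; returns N x 4 (x, y, z, refl)."""
    zi, xi = np.nonzero(np.isfinite(depth))
    pts = np.column_stack([xi * x_res,          # along-road position
                           depth[zi, xi],       # lateral distance
                           zi * z_res,          # height
                           reflectivity[zi, xi]])
    return pts[pts[:, 3] >= min_reflectivity]
```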
In a fourth example use case, as depicted in Figure 28A, an actual footprint is determined from a radar-based range sensor in the vehicle and correlated to a reference footprint determined from data in the distance and radar reflectivity channels of the reference data, so as to achieve sparse positioning of the vehicle. A first approach is shown in Figure 28B, wherein reference data is used to reconstruct a 3D scene and data in the radar reflectivity channel is used to retain only the radar-reflective points. This 3D scene is then correlated with the radar point cloud as seen by the car.
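A rough sketch of that filtering-plus-correlation idea; the nearest-neighbour match fraction used as a score is my own simplification, not the patent's correlation method:

```python
import numpy as np

def radar_match_score(radar_points: np.ndarray, reference_points: np.ndarray,
                      radar_reflectivity: np.ndarray,
                      min_reflectivity: float = 0.5,
                      tolerance_m: float = 1.0) -> float:
    """Drop everything from the reconstructed reference scene except the
    radar-reflective points, then score what fraction of the vehicle's radar
    detections land within tolerance of a surviving reference point."""
    ref = reference_points[radar_reflectivity >= min_reflectivity]
    if len(ref) == 0 or len(radar_points) == 0:
        return 0.0
    # Brute-force nearest-neighbour distances (fine for a sketch, not at scale).
    d = np.linalg.norm(radar_points[:, None, :] - ref[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tolerance_m))
```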
It will of course be understood that the various use cases could be used together, i.e. fused, to allow for a more precise localisation of the vehicle relative to the digital map.
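If each per-sensor use case yields a longitudinal offset with some uncertainty, the fusion could be as simple as inverse-variance weighting; this is purely my illustration of "fused", as the patent does not specify a method:

```python
import numpy as np

def fuse_position_estimates(estimates, variances):
    """Inverse-variance weighted fusion of per-sensor longitudinal offsets
    (e.g. laser, camera, radar) into one estimate; the fused variance is
    always smaller than the best individual one."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    fused = float(np.sum(w * est) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

# e.g. laser +0.8 m (var 0.04), camera +1.1 m (var 0.25), radar +0.9 m (var 0.09)
print(fuse_position_estimates([0.8, 1.1, 0.9], [0.04, 0.25, 0.09]))
```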