Chapters

Permanent URI for this collection: http://hdl.handle.net/11652/4775

Now showing 1 - 2 of 2
  • Item
    On the Importance of the RGB-D Sensor Model in the CNN-based Robotic Perception
    (Wydawnictwo Politechniki Łódzkiej, 2023) Zieliński, Mikołaj; Belter, Dominik
    Mobile and manipulation robots operating indoors use RGB-D cameras as environment perception sensors. Neural networks are applied to process the data from the RGB and depth cameras. These neural-based systems are trained on synthetic datasets because of the difficulty of obtaining ground-truth data on real robots. As a result, the neural model deployed on the real robot does not achieve satisfactory performance, owing to the differences between the images seen during training and during inference. In this paper, we show the importance of modeling the depth sensor while training the neural network on a synthetic dataset. We show that the resulting neural model can be used on the real robot and can process data from a real RGB-D camera.
  • Item
    3D Reconstruction of Non-Visible Surfaces of Objects from a Single Depth View – Comparative Study
    (Wydawnictwo Politechniki Łódzkiej, 2023) Staszak, Rafał; Michałek, Piotr; Chudziński, Jakub; Kopicki, Marek; Belter, Dominik
    Scene and object reconstruction is an important problem in robotics, in particular for planning collision-free trajectories or for object manipulation. This paper compares two strategies for reconstructing the non-visible parts of an object's surface from a single RGB-D camera view. The first method, DeepSDF, predicts the Signed Distance Transform to the object surface for a given point in 3D space. The second method, MirrorNet, reconstructs the occluded parts of objects by generating images from the other side of the observed object. Experiments performed with objects from the ShapeNet dataset show that the view-dependent MirrorNet is faster and has smaller reconstruction errors in most categories.
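
The first abstract argues that synthetic depth images should be corrupted with a sensor model before training, so that the network sees training data resembling real RGB-D measurements. The papers themselves do not publish their sensor model here, so the following is only a minimal illustrative sketch of the idea, using three common depth-camera artifacts: range-dependent Gaussian noise, quantization of the measured depth, and random pixel dropout (invalid returns). The function name and all parameter values are hypothetical.

```python
import numpy as np

def apply_depth_sensor_model(depth, noise_std=0.01, dropout_prob=0.05,
                             quant_step=0.001, rng=None):
    """Perturb an ideal synthetic depth map (meters) with a toy RGB-D
    sensor model: range-dependent Gaussian noise, quantization to the
    sensor's depth resolution, and random dropout of measurements.
    This is an illustrative stand-in, not the model from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    # Noise magnitude grows with distance, as on real depth cameras.
    noisy = depth + rng.normal(0.0, noise_std, depth.shape) * depth
    # Quantize to a fixed depth step (e.g. 1 mm).
    noisy = np.round(noisy / quant_step) * quant_step
    # Randomly drop pixels; 0.0 marks an invalid measurement.
    invalid = rng.random(depth.shape) < dropout_prob
    noisy[invalid] = 0.0
    return noisy
```

In a training pipeline along these lines, the perturbation would be applied on the fly to each rendered depth map, so the network never sees the unrealistically clean synthetic depth.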
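
The DeepSDF strategy in the second abstract represents an object implicitly: a network maps any 3D query point to its signed distance from the object surface (negative inside, zero on the surface, positive outside). As a hedged sketch of that interface, the snippet below uses an analytic sphere in place of the learned network; the function names are illustrative, not from the paper's code.

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from 3D points to a sphere surface: negative
    inside, zero on the surface, positive outside. A trained DeepSDF
    network plays this role for arbitrary object shapes."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

def classify(points, sdf=sphere_sdf):
    """Label each query point by the sign of its signed distance,
    which is how an implicit surface is read out from an SDF."""
    d = sdf(np.asarray(points, dtype=float))
    return np.where(d < 0, "inside", np.where(d > 0, "outside", "surface"))
```

Reconstructing the non-visible surface then amounts to evaluating the SDF on a dense grid of query points and extracting the zero level set, e.g. with marching cubes.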