This project proposes an architecture that fuses geometric features computed from point clouds with image-based classifications produced by a Convolutional Neural Network (CNN).
The architecture fuses the features computed by N perception modules. This implementation uses two perception modules: point clouds (PITT) and a Convolutional Neural Network (CNN, TensorFlow). Between each perception module and the Feature Selector module there is an Adapter, which implements a standard message for that perception module. The Feature Selector receives all the information acquired by the sensors and produces two outputs: intersection data, which identifies the features common to objects coming from different perception modules, and union data, which contains all the information from the sensors. The Correlation Table Manager takes the common features and maintains a table of correlations between objects recognized by different perception modules. The Reasoner takes the output of the Correlation Table Manager as input and generates a correlation index for the objects recognized by different perception modules. Finally, the Feature Matcher joins the data coming from the Reasoner and the Feature Selector by searching for the objects' IDs and gathering all the features coming from the different perception modules. It then returns one output message per recognized object, containing all the information collected by the various sensors.
The custom ROS message definitions, grouped by message file:

feature.msg (a single feature of an object):

    string types
    string name
    string[] value

obj.msg (an object, i.e. a list of features plus the id of the perception module that detected it):

    feature[] obj
    uint32 id_mod

adapter.msg (the standard message produced by an Adapter):

    obj[] adap

commonFeature.msg (features common to different perception modules):

    adapter[] common

selectorMatcher.msg (the output of the Feature Selector):

    commonFeature[] matcher

corr.msg (one entry of the correlation table):

    string first_percepted_object
    string second_percepted_object
    float32 correlation

The correlation table published by the Correlation Table Manager:

    corr[] table

record.msg (one line of the Reasoner's output):

    string[] rec
    float32 corr

outputReasoner.msg (the output of the Reasoner):

    record[] lines

matcherObj.msg (the final per-object message produced by the Feature Matcher):

    obj[] sameObj
    float32 correlation
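As an illustration only, assuming the grouping above and that the lowercase message classes are importable from the sofar_multimodal package, an adapter.msg could be built in rospy like this:

    import rospy
    from sofar_multimodal.msg import feature, obj, adapter

    rospy.init_node('adapter_example')

    # One feature of a detected object, e.g. its colour as seen by a sensor.
    f = feature(types='visual', name='color', value=['red'])

    # An object is a list of features plus the id of the perception module.
    o = obj(obj=[f], id_mod=1)

    # The adapter message wraps all objects reported by one perception module.
    pub = rospy.Publisher('/outputAdapterPitt', adapter, queue_size=10)
    rospy.sleep(1.0)  # give subscribers time to connect before publishing
    pub.publish(adapter(adap=[o]))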
The PITT perception module processes point clouds and computes geometric features for the objects it detects.
- Input: point clouds
- Output: the PITT-specific message, consumed by its Adapter
- Publisher: [O1]
The TensorFlow perception module runs a Convolutional Neural Network (CNN) that classifies the objects appearing in images.
- Input: images
- Output: the TensorFlow-specific message, consumed by its Adapter
- Publisher: [O2]
This module sits between a perception module and the Feature Selector module and converts the perception module's output into a standard message. We provide an adapter for the PITT module and another for the TensorFlow module; to add another perception module, a dedicated adapter must be implemented (a minimal sketch follows this module description).
- Input: the message type of a perception module
- Output: an adapter.msg
- Publisher: /outputAdapterPitt | /outputAdapterTensor
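A minimal sketch of an adapter node, assuming a hypothetical input topic /pitt/output of type std_msgs/String (the real PITT message type differs):

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String  # placeholder for the real PITT message type
    from sofar_multimodal.msg import feature, obj, adapter

    def callback(msg):
        # Map the perception-specific message onto the standard format.
        f = feature(types='geometric', name='shape', value=[msg.data])
        pub.publish(adapter(adap=[obj(obj=[f], id_mod=1)]))

    rospy.init_node('adapter_pitt_sketch')
    pub = rospy.Publisher('/outputAdapterPitt', adapter, queue_size=10)
    rospy.Subscriber('/pitt/output', String, callback)  # hypothetical topic name
    rospy.spin()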
It receives all the information acquired by the sensors and stores it in a buffer. It produces two outputs: the first transmits the information saved in the buffer to the Feature Matcher; the second transmits to the Correlation Table Manager. For the second output, the module identifies any features common to the objects recognized by the different sensors (for example, shape or color); a sketch of this intersection step follows this module description.
- Input: an adapter.msg
- Output: a selectorMatcher.msg
- Publisher: /featureScheduler/pubIntersection [F] | /featureScheduler/pubUnion [R]
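A minimal sketch of the intersection step, assuming two features match when their name and values coincide (the actual comparison in featureScheduler.py may differ):

    def common_features(objects_a, objects_b):
        """Return the (name, value) pairs shared by objects from two modules."""
        def keys(objects):
            # Each obj message carries a list of feature messages in its 'obj' field.
            return {(f.name, tuple(f.value)) for o in objects for f in o.obj}
        return keys(objects_a) & keys(objects_b)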
In this module, the data received from the Reasoner and the Feature Selector are joined. From the Reasoner, the module receives the IDs of the objects detected by different perception modules, plus their degree of correlation (a number indicating the reliability of the match). From the Feature Selector, it receives all the data of the detected objects. The Feature Matcher matches the two by searching for the objects' IDs and gathering all the features coming from the different perception modules. It then returns one output message per recognized object, containing all the information collected by the various sensors (a sketch of the join follows this module description).
- Input: a selectorMatcher.msg | outputReasoner.msg
- Output: a matcherObj.msg
- Publisher: /featureMatcher/dataPub [P]
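A minimal sketch of the join, assuming each record's rec field lists the IDs of matching objects and that detected objects are kept in a dictionary keyed by ID (both assumptions):

    from sofar_multimodal.msg import matcherObj

    def join(reasoner_lines, objects_by_id):
        """Build one matcherObj per group of matching objects.

        reasoner_lines: list of record messages from an outputReasoner.msg;
        objects_by_id: dict mapping an object ID to its obj message (assumed).
        """
        out = []
        for line in reasoner_lines:
            same = [objects_by_id[i] for i in line.rec if i in objects_by_id]
            out.append(matcherObj(sameObj=same, correlation=line.corr))
        return out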
It receives the common features identified by the Feature Selector and maintains a table of correlations between the objects recognized by different perception modules, one corr entry per pair of objects (a sketch follows this module description).
- Input: the common-feature output of the Feature Selector
- Output: the correlation table (corr[] table)
- Publisher: [T]
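As an illustration, one line of the correlation table could be produced like this (the scoring function is an assumption; the real logic lives in tableMatcher.py):

    from sofar_multimodal.msg import corr

    def table_entry(id_a, id_b, score):
        """One table line correlating two objects seen by different modules."""
        return corr(first_percepted_object=id_a,
                    second_percepted_object=id_b,
                    correlation=score)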
It takes the correlation table as input and generates an index of correlation for the objects recognized by different perception modules (a sketch follows this module description).
- Input: the output of the Correlation Table Manager
- Output: an outputReasoner.msg
- Publisher: [U]
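A minimal sketch of how a correlation index could be derived from the table, here by averaging all entries for the same object pair (the actual rule in reasonerMain.py may differ):

    from collections import defaultdict
    from sofar_multimodal.msg import record

    def correlation_index(table):
        """table: list of corr messages -> one record per object pair."""
        scores = defaultdict(list)
        for entry in table:
            pair = (entry.first_percepted_object, entry.second_percepted_object)
            scores[pair].append(entry.correlation)
        return [record(rec=list(pair), corr=sum(v) / len(v))
                for pair, v in scores.items()]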
ROS must be installed on the machine, together with rospy and Python.
cd catkin_ws/src/
git clone https://github.com/maicivan/sofar_multimodal.git
cd ..
catkin_make
source devel/setup.bash
roscore &
rosrun sofar_multimodal talkerPitt &
rosrun sofar_multimodal talkerTensor &
rosrun sofar_multimodal adapterPitt &
rosrun sofar_multimodal adapterTensor &
rosrun sofar_multimodal featureScheduler.py &
rosrun sofar_multimodal tableMatcher.py &
rosrun sofar_multimodal reasonerMain.py &
rosrun sofar_multimodal featuresMatcher.py &
To monitor the output:
rostopic echo /featureMatcher/dataPub
This section presents the results of the working system, using images or videos, in a real or simulated setting.
To add a perception module, you need to add an Adapter module between the new perception module and the Feature Selector module. Then add a subscriber and its callback in featureScheduler.py, following the commented example in the script; a hypothetical sketch is shown below.
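A hypothetical sketch of that step (the topic name, callback, and buffer variable are all assumptions; follow the commented example in featureScheduler.py for the real pattern):

    import rospy
    from sofar_multimodal.msg import adapter

    buffer = []  # the Feature Selector's internal store (assumed name)

    def new_module_callback(msg):
        # Add the new module's objects to the buffer used by the scheduler.
        buffer.extend(msg.adap)

    rospy.Subscriber('/outputAdapterNewModule', adapter, new_module_callback)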
The docs directory contains the Doxygen documentation in HTML and LaTeX formats.
- Filippo Lapide
- Vittoriofranco Vagge
- Maicol Polvere
- Daniele Torrigino
- Francesco Giovinazzo
- Nicolò Baldassarre
- Andrea Rusconi
- Matteo Panzera
- Francesco Bruno
- Ariel Gjaci