Output
As for all the other extractor modules, the results are placed in ALMemory. You can open the web page of your robot with your favorite browser and look for PictureDetected in the search field. When something is recognized, you will see an ALValue (a series of fields in brackets) organized as explained here:
- If nothing is recognized, the variable is empty. More precisely, it is an array with zero elements (i.e., printed as [ ] in Python).
- If things are recognized, then the variable structure consists of the following fields:
[ [TimeStampField], [Picture_info_0, Picture_info_1, ..., Picture_info_N] ], with as many Picture_info entries as objects currently recognized,
with:
- TimeStampField = [ TimeStamp_seconds, TimeStamp_microseconds ]. This field is the timestamp of the frame used to perform the recognition.
- Picture_info = [ [labels_list], matched_keypoints, ratio, [boundary_points] ]
- labels_list = [label_0, label_1, ..., label_N], where label_n belongs to label_n+1 (e.g. "page 9" belongs to "my book")
- matched_keypoints corresponds to the number of keypoints retrieved for the object in the current frame
- ratio represents the number of keypoints found for the picture in the current image divided by the number of keypoints obtained during the learning stage
- boundary_points = [ [x0,y0], [x1,y1], ..., [xN,yN] ] is a list of points, expressed in angle coordinates (to be independent of the camera resolution), representing the reprojection in the current image of the boundaries selected during the learning stage for the object.
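As a minimal sketch, the structure above can be unpacked in Python like this. The helper name and the sample value are purely illustrative (not captured from a real robot); on a robot you would obtain the value with ALMemory's `getData("PictureDetected")` instead of the hard-coded sample.

```python
def parse_picture_detected(value):
    """Split a PictureDetected-style ALValue into (timestamp, object list)."""
    if not value:  # nothing recognized: the value is an empty array []
        return None, []
    timestamp, picture_infos = value[0], value[1]
    objects = []
    for labels, matched_keypoints, ratio, boundary_points in picture_infos:
        objects.append({
            "labels": labels,                     # [label_0, ..., label_N]
            "matched_keypoints": matched_keypoints,
            "ratio": ratio,                       # matched / learned keypoints
            "boundary_points": boundary_points,   # [[x0, y0], ..., [xN, yN]]
        })
    return timestamp, objects

# Illustrative sample: one recognized object, "page 9" belonging to "my book".
sample = [
    [1398958712, 345678],                         # [seconds, microseconds]
    [[["page 9", "my book"], 42, 0.35,
      [[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]]]],
]

ts, objects = parse_picture_detected(sample)
print(objects[0]["labels"])  # ['page 9', 'my book']
```

Note that the empty-array case must be checked before indexing, since PictureDetected is printed as [ ] whenever no object is recognized.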