* [WIP] FoodOnForkDetector class, dummy instantiation, and node are in place but untested
* Tested dummy food on fork detector
* Deleted mistakenly added fork handle masks
* [WIP] added and tested train-test script
* Added PointCloudTTestDetector and tested on offline data
* Retrained with new dataset
* Fix config file
* Fix imports
* Re-add wrongly removed masks
* Retrained with new rosbag
* Added filters, retrained with only the new rosbag
* Have a great, working detector
* Formatting, cleaning up code
* Updated README
* Added launchfile
* Moved to forkTip frame and changed distance aggregator to 90th percentile
* Fixes from in-person testing
* Overlaid stored noFof points on camera image
---------
Co-authored-by: Ethan K. Gordon <[email protected]>
ada_feeding_perception/README.md: 32 additions & 0 deletions
@@ -91,3 +91,35 @@ Launch the web app along with all the other nodes (real or dummy) as documented
- `offline.images` (list of strings, required): The paths, relative to `install/ada_feeding_perception/share/ada_feeding_perception`, to the images to test.
- `offline.point_xs` (list of ints, required): The x-coordinates of the seed points. Must be the same length as `offline.images`.
- `offline.point_ys` (list of ints, required): The y-coordinates of the seed points. Must be the same length as `offline.images`.
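For reference, a parameter file exposing these values might look like the following sketch. This is illustrative only: the node name is a placeholder and the image paths/coordinates are made up; the parameter file shipped with this package is the source of truth.

```yaml
# Illustrative sketch: node name, paths, and coordinates are placeholders.
test_segmentation_offline:          # hypothetical node name
  ros__parameters:
    offline:
      images:                       # becomes the `offline.images` parameter
        - test_images/food_1.jpg
        - test_images/food_2.jpg
      point_xs: [320, 310]          # seed-point x-coordinates, one per image
      point_ys: [240, 255]          # seed-point y-coordinates, one per image
```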
## Food-on-Fork Detection
Our eye-in-hand Food-on-Fork Detection node and its training/testing infrastructure were designed to make it easy to substitute and compare other food-on-fork detectors. Below are instructions on how to do so.
1. **Developing a new food-on-fork detector**: Create a subclass of `FoodOnForkDetector` that implements all of the abstract methods. Note that, as of now, a model does not have access to a real-time TF buffer at test time; hence, **all transforms that the model relies on must be static**.
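As a rough sketch of what such a subclass looks like (the base-class import path and the method names below are placeholders, not the actual abstract interface; see the `FoodOnForkDetector` base class in this package for the real API):

```python
# Illustrative sketch only: the real abstract methods/signatures are defined by
# the FoodOnForkDetector base class; `fit` and `predict_proba` are placeholders.
import numpy as np

from ada_feeding_perception.food_on_fork_detectors import FoodOnForkDetector  # assumed module path


class DepthThresholdFoodOnForkDetector(FoodOnForkDetector):
    """Toy detector that counts depth pixels inside a static box around the fork tip.

    Because a model only has access to static transforms at test time, the box
    must be defined in a frame whose transform to the camera never changes.
    """

    def fit(self, depth_images: np.ndarray, labels: np.ndarray) -> None:
        # Placeholder: learn a threshold on the number of in-box points from
        # labeled "food" / "no food" depth images.
        ...

    def predict_proba(self, depth_images: np.ndarray) -> np.ndarray:
        # Placeholder: return, per image, the probability that food is on the fork.
        ...
```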
2. **Gather the dataset**: Because this node uses the eye-in-hand camera, it is sensitive to the relative pose between the camera and the fork. If you are using PRL's robot, [the dataset collected in early 2024](https://drive.google.com/drive/folders/1hNciBOmuHKd67Pw6oAvj_iN_rY1M8ZV0?usp=drive_link) may be sufficient. Otherwise, you should collect your own dataset:
1. The dataset should consist of a series of ROS2 bags, each recording the following: (a) the aligned depth to color image topic; (b) the color image topic; (c) the camera info topic (we assume it is the same for both); and (d) the TF topic(s).
2. We recorded three types of bags: (a) bags where the robot was going through the motions of feeding without food on the fork and without the fork nearing a person or plate; (b) the same as above but with food on the fork; and (c) bags where the robot was acquiring and feeding a bite to someone. We used the first two types of bags for training, and the third type of bag for evaluation.
3. All ROS2 bags should be in the same directory, with a file `bags_metadata.csv` at the top-level of that directory.
4. `bags_metadata.csv` contains the following columns: `rosbag_name` (str), `time_from_start` (float), `food_on_fork` (0/1), `arm_moving` (0/1). The file only needs rows for timestamps at which one or both of the latter two columns change; values are assumed to stay the same at intermediate timestamps (an example file is shown after this list).
5. To generate `bags_metadata.csv`, we recommend launching RVIZ, adding your depth and/or RGB image topic, and playing back the bag, e.g.:
1. `ros2 run rviz2 rviz2 --ros-args -p use_sim_time:=true`
2. `ros2 bag play 2024_03_01_two_bites_3 --clock`
3. Pause and resume the rosbag playback when food goes on/off the fork and when the arm starts/stops moving, and populate `bags_metadata.csv` accordingly (the elapsed time since the start of the bag is visible at the bottom of RVIZ).
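For illustration, a `bags_metadata.csv` with made-up timestamps (the bag name is the one from the example above) could look like this; each row records the state starting at its `time_from_start`:

```csv
rosbag_name,time_from_start,food_on_fork,arm_moving
2024_03_01_two_bites_3,0.0,0,0
2024_03_01_two_bites_3,3.5,0,1
2024_03_01_two_bites_3,12.0,1,1
2024_03_01_two_bites_3,28.0,0,1
2024_03_01_two_bites_3,45.0,0,0
```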
3. **Train/test the model on offline data**: We provide a flexible Python script, `food_on_fork_train_test.py`, to train, test, and/or compare one or more food-on-fork models. To use it, first ensure you have built and sourced your workspace and are in the directory that contains the script (e.g., `cd ~/colcon_ws/src/ada_feeding/ada_feeding_perception/ada_feeding_perception`). To enable flexible use, the script has **many** command-line arguments; we recommend you read their descriptions with `python3 food_on_fork_train_test.py -h`. For reference, we include the command we used to train our model below:
Note that we trained our model on data where the fork either had or didn't have food on it the whole time and never neared any objects (e.g., the plate or the user's mouth). (Also, note that not all of the above ROS2 bags are necessary; we have trained accurate detectors with half of them.) We then did an offline evaluation of the model on bags of actual feeding data:
4. **Test the model on online data**: First, copy the parameters you used when training your model, as well as the filename of the saved model, to `config/food_on_fork_detection.yaml`. Re-build and source your workspace.
1. **Live Robot**:
1. Launch the robot as usual; the `ada_feeding_perception` launchfile will launch food-on-fork detection.
2. Toggle food-on-fork detection on: `ros2 service call /toggle_food_on_fork_detection std_srvs/srv/SetBool "{data: true}"`
3. Echo the output of food-on-fork detection: `ros2 topic echo /food_on_fork_detection`
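When you are done testing, you can toggle detection back off with the same service:

```bash
ros2 service call /toggle_food_on_fork_detection std_srvs/srv/SetBool "{data: false}"
```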