Our solution to the Robothon 2023 challenge is fully open-source, so that the community can build on top of it and improve it further. All components are stored under Platonics Delft.
You can find all required software components and links to their installation guides below. The first repository contains the controller, written in C++; the second is the Python manipulation package. Follow the installation instructions in the README.md of each repository:
- Franka Cartesian Impedance Controller
- Robothon Manipulation
First, start the controller:

```
roslaunch franka_robothon_friendly_controllers cartesian_variable_impedance_controller.launch robot_ip:=ROBOT_IP
```
Then start the camera:

```
roslaunch box_localization camera.launch
```
Next, record a template:

```
python3 manipulation_tasks_panda/trajectory_manager/record_template.py "name_template"
```
You can now record a demonstration:

```
python3 manipulation_tasks_panda/trajectory_manager/recording_trajectory.py "name skill"
```
During demonstration, the robot records its Cartesian pose and the current camera image at every time step (the control frequency is set to 20 Hz).
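The recording loop can be sketched as follows. This is a minimal illustration, not the package's actual node: `read_pose` and `read_image` are hypothetical stand-ins for the ROS subscribers, so the sketch runs without a robot.

```python
import time

RATE_HZ = 20          # control frequency stated above
DT = 1.0 / RATE_HZ

def record_demonstration(read_pose, read_image, duration_s):
    """Sample (pose, image) pairs at a fixed 20 Hz rate.

    read_pose and read_image are plain callables standing in for the
    ROS callbacks the real package uses."""
    trajectory = []
    for _ in range(int(duration_s * RATE_HZ)):
        trajectory.append((read_pose(), read_image()))
        time.sleep(DT)  # rospy.Rate(RATE_HZ).sleep() in a real ROS node
    return trajectory
```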
During execution, the robot tracks the recorded trajectory while trying to minimize the discrepancy between the current image at each timestep and the one recorded during the demonstration. This increases the reliability of critical picking and insertion tasks.
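As a sketch of how such an image-based correction could adjust the tracked setpoint, consider the toy function below. The `pixel_offset` input and the metres-per-pixel `gain` are illustrative assumptions, not the package's API.

```python
def corrected_setpoint(recorded_pose, pixel_offset, gain=0.0005):
    """Shift a recorded Cartesian setpoint to reduce the image discrepancy.

    pixel_offset: hypothetical (du, dv) shift between the current camera
    image and the one stored at the same timestep of the demonstration.
    gain: hypothetical metres-per-pixel factor mapping the shift into the
    camera plane."""
    x, y, z = recorded_pose
    du, dv = pixel_offset
    return (x + gain * du, y + gain * dv, z)
```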
## Execute Learned Skill
To execute a learned skill, run:

```
python3 manipulation_tasks_panda/trajectory_manager/playback_trajectory.py "name skill"
```
To run all the skills in a row, modify manipulation_tasks_panda/trajectory_manager/playall.py according to the names of your skills. Run

```
python3 manipulation_tasks_panda/trajectory_manager/playall.py 0
```

if you do not want the active localizer, or

```
python3 manipulation_tasks_panda/trajectory_manager/playall.py 1
```

otherwise.
The second command requires that you also launch the localization service:

```
roslaunch box_localization box_localization.launch template:='name_template'
```
In this second case, the robot first localizes the object to match the template given during demonstration, transforms the skill into the new reference frame, and then executes it.
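Transforming the skill into the localized frame amounts to applying a rigid transform to the demonstrated points. A minimal sketch, assuming the localizer yields a 4x4 homogeneous transform `T` (an assumed interface, not the package's actual one):

```python
import numpy as np

def transform_skill(positions, T):
    """Re-express demonstrated positions in the newly localized frame.

    positions: (N, 3) Cartesian points recorded during the demonstration.
    T: hypothetical 4x4 homogeneous transform from the template frame to
       the box pose found by the localizer."""
    homogeneous = np.hstack([positions, np.ones((len(positions), 1))])
    return (homogeneous @ T.T)[:, :3]
```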
During demonstration or execution, it is possible to give feedback to the learned skill using the computer keyboard.
The motivation for explicitly enabling or disabling the haptic and local camera feedback is that, during a long trajectory, the user can explicitly teach the robot where to use each of them.
For example, it makes sense to activate the haptic feedback only during insertion tasks, so that the robot does not start spiraling when it perceives external forces that are not due to a critical insertion.
It is worth noticing that if no insertion task is being performed, in case of a large force the robot temporarily stops the execution of the motion until the disturbance is gone. This increases safety when interacting with humans.
This feedback is useful when, for example, after the demonstration the robot does not press the button hard enough. By pressing j, the trajectory is locally modified, and on the next execution the robot applies enough force.
At the end of every playback, the program asks whether to overwrite the old trajectory. If the changes are accepted, the robot will remember the feedback in the next executions.
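One way such a keypress could locally modify the trajectory is sketched below. The blending window and triangular weighting are illustrative choices, not necessarily what the package implements.

```python
def apply_feedback(trajectory, index, delta, window=5):
    """Locally shift a recorded trajectory around the current timestep.

    A keypress during playback (e.g. 'j') is mapped to a small Cartesian
    delta; neighbouring waypoints are blended with triangular weights so
    the corrected motion stays smooth."""
    out = [list(p) for p in trajectory]
    for i in range(max(0, index - window), min(len(out), index + window + 1)):
        weight = 1.0 - abs(i - index) / (window + 1)  # 1 at index, fades out
        for k in range(3):
            out[i][k] += weight * delta[k]
    return [tuple(p) for p in out]
```

The point at the current timestep receives the full correction, while points outside the window are left untouched, so the rest of the skill is preserved.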