- 24 May, 2021 4 commits
-
-
Günter Niemeyer authored
The planners: (i) AStarSkeleton.py comes from 133b HW#1, running directly on a grid, but updated for ROS and to run backwards (from goal to start), in case the start is moving. Please FIX the distance and other costs!! (ii) prm_mattress.py is straight from 133b HW#3. You'll have to remove the visualization, update for our map, etc. Also note it was originally written for Python 3, though the differences are small (see Ed Discussion for a post). The moveskeletonv2.py adds two features: (a) it publishes the waypoints for RVIZ - nice to see and debug; you will need a Marker display in RVIZ to see them. And (b) it checks whether the TF library is properly set up before starting. Otherwise startup could trigger temporary TF errors, which are always annoying.
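For reference, here is a minimal sketch of what "grid A* run backwards" can look like. This is not the HW#1 code - the function name, unit step cost, and Manhattan heuristic are my own placeholder choices (the FIXME marks exactly the costs the commit asks you to fix):

```python
import heapq

def astar_backwards(grid, start, goal):
    """Grid A* searched from goal to start, so walking the parent
    chain from the start yields the path already in driving order,
    even if the start keeps moving.  grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(node):
        # Manhattan distance to the *start*, since we search backwards.
        return abs(node[0] - start[0]) + abs(node[1] - start[1])

    frontier = [(heuristic(goal), 0, goal, None)]
    came_from = {}                       # node -> parent (toward goal)
    cost_so_far = {goal: 0}
    while frontier:
        _, cost, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue                     # lazy deletion of stale entries
        came_from[node] = parent
        if node == start:
            path = []                    # parents lead back to the goal,
            while node is not None:      # so this is start, ..., goal
                path.append(node)
                node = came_from[node]
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                new_cost = cost + 1      # FIXME: real distance/other costs
                if new_cost < cost_so_far.get(nxt, float('inf')):
                    cost_so_far[nxt] = new_cost
                    heapq.heappush(frontier,
                                   (new_cost + heuristic(nxt),
                                    new_cost, nxt, node))
    return None                          # no path found
```

The point of seeding the search at the goal is that the parent tree is rooted at the goal, so the path pops out ordered from the (possibly moving) start.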
-
Günter Niemeyer authored
This slightly tweaks (i) the time constant with which I expect the motors to stop, and (ii) the minimum velocity I still consider moving. This is an effort *not* to drop encoder ticks as the bot comes to a stop. Note it is still *very* difficult to accurately assign ticks when the direction suddenly changes. So we recommend coming to a stop, waiting at least 0.5 second (probably a full second), then driving again. That is, no immediate direction changes (per wheel).
-
Günter Niemeyer authored
This estimates the latency between when the depth image was originally taken and when it arrives in the callback as 80ms. I calibrated this against the odometry, so that sudden movements show up at the same time in both channels, keeping the data as self-consistent as possible. Note, I would really like to use the timestamp provided by the RealSense, as the library actually knows this info. Except the provided stamp is bad (somehow corrupted, not monotonically or smoothly increasing). So this is our best alternative.
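The stamping itself reduces to back-dating the arrival time by the calibrated latency. A tiny sketch (the names and the helper are hypothetical, not the node's actual code; the 80ms constant is from the commit):

```python
# Calibrated against the odometry, per the commit message.
DEPTH_LATENCY = 0.080   # seconds between capture and callback arrival

def stamp_depth_image(arrival_time, latency=DEPTH_LATENCY):
    """Estimate the capture time by back-dating the arrival time.
    Times are plain float seconds here; a real node would use
    ROS time objects instead."""
    return arrival_time - latency
```

The key design point is that the arrival time must be recorded first thing in the callback, before any processing delay is added on top.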
-
Günter Niemeyer authored
This (i) makes sure at least 'minimum_contacts' contacts occur at the closest range, and (ii) averages those contacts. This is an attempt to reject noisy camera data (older RealSense, likely broken). But I'm not convinced it is fundamentally better - despite the averaging, the ranges are temporally no less stable.
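One way to read (i) and (ii) as code - this is my hypothetical reimplementation of the idea, not the node's actual logic, and the `window` tolerance is an assumed parameter:

```python
def closest_range(ranges, minimum_contacts=5, window=0.05):
    """Average at least `minimum_contacts` returns near the closest range.

    Sketch of the commit's idea: take the nearest valid reading,
    gather the other returns within `window` meters of it, and average
    them so a single noisy pixel cannot set the reported range.
    """
    candidates = sorted(r for r in ranges if r > 0.0)   # drop invalid zeros
    if len(candidates) < minimum_contacts:
        return None                                      # not enough data
    closest = candidates[0]
    contacts = [r for r in candidates if r <= closest + window]
    if len(contacts) < minimum_contacts:
        contacts = candidates[:minimum_contacts]         # force the minimum
    return sum(contacts) / len(contacts)
```

As the commit notes, this rejects spatial outliers within one scan, but does nothing for frame-to-frame (temporal) jitter.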
-
- 19 May, 2021 2 commits
-
-
Günter Niemeyer authored
Added the skeleton for the move node. Please update as needed, in particular to (1) command the robot to move to a point, (2) include a planner, (3) update the obstacle map.
-
Günter Niemeyer authored
The V4 version fixes a bug that read the wrong transform in the callback handling the RVIZ reset. Everything worked when the bot hadn't moved. But once the bot moved, the wrong transform meant the reset didn't act correctly. This only changes one line, fetching the odom->base transform, which accidentally fetched base->odom. Try diff fakelocalizev3.py fakelocalizev4.py to see the one-line difference.
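The bug is easy to reproduce numerically: odom->base and base->odom are inverses of each other, and they coincide only at the identity - i.e. before the bot has moved. A small sketch with homogeneous 2D transforms (my own illustration, not the node's code):

```python
import numpy as np

def planar_T(x, y, theta):
    """Homogeneous 2D transform (parent <- child)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# odom -> base for a bot that has driven and turned:
T_odom_base = planar_T(1.0, 0.5, np.pi / 4)
T_base_odom = np.linalg.inv(T_odom_base)

# At startup (identity) the two agree, which is why the bug was
# invisible until the bot moved:
assert np.allclose(planar_T(0, 0, 0), np.linalg.inv(planar_T(0, 0, 0)))
# Once the bot moves, they differ:
assert not np.allclose(T_odom_base, T_base_odom)
```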
-
- 18 May, 2021 1 commit
-
-
Günter Niemeyer authored
This is an effort to fix issues; it should be redundant and unnecessary. The depthtolaserscan node should also be publishing the camera -> laser transform. The two broadcasts would be the same, but to avoid contention I had removed the laser frame from the URDF. Apparently the depthtolaserscan node may not be broadcasting after all, so I added the frame back in to make sure someone is.
-
- 17 May, 2021 3 commits
-
-
Günter Niemeyer authored
This now has methods to provide (a) the sine/cosine of the theta value, and (b) the transform of a point to the parent frame. These may be useful in the localization, as the map origin provides yet another frame.
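A minimal sketch of what such a class can look like - the class name matches the earlier commits, but the method names (`sin`, `cos`, `inParent`) are my guesses, not necessarily the actual API:

```python
import math

class PlanarTransform:
    """Sketch of an x/y/theta planar transform with the two helpers
    described above: sine/cosine accessors and a point transform
    into the parent frame."""
    def __init__(self, x, y, theta):
        self.x, self.y, self.theta = x, y, theta

    def sin(self):
        return math.sin(self.theta)

    def cos(self):
        return math.cos(self.theta)

    def inParent(self, px, py):
        """Rotate a point by theta and offset by (x, y), mapping it
        from this frame into the parent frame."""
        c, s = self.cos(), self.sin()
        return (self.x + c * px - s * py,
                self.y + s * px + c * py)
```

Chaining such transforms (map -> odom -> base) is exactly the situation where the extra map-origin frame shows up.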
-
Günter Niemeyer authored
This helps line up the depth points better. Careful - the 17deg may be different on different bots.
-
Günter Niemeyer authored
This accounts for errors in my depth measurements at larger depths.
-
- 14 May, 2021 2 commits
-
-
Günter Niemeyer authored
This only changes the queue size of the scan subscriber. By setting the size to 1, the incoming queue will drop any messages that arrive while the previous message is still being processed, preventing the code from getting behind. Also updated the RVIZ LaserScan buffer, so it can better handle slow updates and still show data correctly.
-
Günter Niemeyer authored
This adds a fakelocalizev2.py that provides the same functionality, but introduces/uses a PlanarTransform class. The class encapsulates the x/y/theta transformations, so hopefully the main code is a little easier to read/follow.
-
- 12 May, 2021 2 commits
-
-
Günter Niemeyer authored
This provides a framework for the localization node, setting up the necessary subscribers and transforms. The actual alignment code is *not* provided.
-
Günter Niemeyer authored
The launch file now demonstrates how to change the default parameters. Also tweaked the subscriber to record a timestamp first thing, so the scan data is associated with the depth image arrival time (not the time of its publication).
-
- 10 May, 2021 2 commits
-
-
Günter Niemeyer authored
This package includes the depthtolaserscan node, which converts the RealSense depth image into a laser scan. Note the Python version runs very slowly (0.3Hz!). So I converted it to C++, which runs happily @30Hz. Simply catkin_make and use the non-Python executable, i.e.

  rosrun depthtolaserscan depthtolaserscan

There are a few parameters, including the minimum height above the floor (to avoid the ground being an obstacle), the maximum height (so we can drive under tables), as well as a sample density. The defaults work well for me, but I am curious how sensitive they are on other bots.
-
Günter Niemeyer authored
Added the example package to show my versions of the low-level wheel control and odometry nodes. This also includes the launch files, as well as a timestamp fix to the pointcloud, without which you cannot drive the robot and see the pointcloud in the correct place. Try

  roslaunch example odometry.launch
  roslaunch example odometry_with_pointcloud.launch

run together with the teleop

  rosrun shared_demo teleop.py

And more importantly, take a look at the files and let me know if you have questions!
-
- 05 May, 2021 1 commit
-
-
Günter Niemeyer authored
This uses two distinct URDF files and distinct mesh files to create two bots: (a) bot_front_wheels.urdf and (b) bot_front_caster.urdf. The camera is always in front, but the bot direction is inherently reversed. It also includes two demo launch files, display_front_wheels.launch and display_front_caster.launch, to show the two options. Finally, the URDF files include the frames for the depth camera and artificial laser scanner, so we can process the camera data correctly. Note the depth camera angle is calibrated to 20deg (using the depth camera to make a vertical object appear vertical). Hopefully the angle is consistent between all bots.
-
- 03 May, 2021 1 commit
-
-
Günter Niemeyer authored
Added a very short Python script in the shared_demo package, to demo reading the depth image and extracting the distance.
-
- 30 Apr, 2021 1 commit
-
-
Spencer T. Morgenfeld authored
-
- 28 Apr, 2021 2 commits
-
-
Günter Niemeyer authored
Added a simple keyboard-based teleop node to the shared_demos package. This uses the curses library to read keystrokes and publishes /vel_cmd messages.
-
Günter Niemeyer authored
We added a bot_description package, which contains the bot's URDF file as well as the corresponding STL meshes. It also provides a launch file and rviz configuration file, to demo/show the URDF.
-
- 22 Apr, 2021 1 commit
-
-
nardavin authored
-
- 21 Apr, 2021 1 commit
-
-
Günter Niemeyer authored
This added code to (a) demo a basic ROS node structure, subscribing to /wheel_command and publishing /wheel_state, (b) create a "wheelcmd" node to send testing /wheel_command messages, and (c) provide Matlab code to read ROS bags, especially with JointState messages.
-
- 12 Apr, 2021 3 commits
-
-
Günter Niemeyer authored
Added a map (pgm and yaml file) from last quarter's Gazebo house. Just to have something to look at/test with.
-
Nicholas A. Ardavin authored
-
Nicholas A. Ardavin authored
-
- 11 Apr, 2021 2 commits
-
-
Nicholas A. Ardavin authored
-
Nicholas A. Ardavin authored
-