**Final project: Maze or research project -- Due 12/22**

**[ENGR 28 Fall 2023](index.html#schedule1_2023-12-12)**

You have two options for the final project: solve a maze with the physical robot, or research a robotics topic of your choosing. Here is what my maze solution looks like:

![](https://youtu.be/-VlUNzDoUg8)

(#) Research project

If you want to complete the research project, choose a research or commercial robotic platform developed or deployed after the year 2000, and create a 5-minute video presentation with slides answering the following questions:

* What is the intended purpose of the robot? What needs does it fulfill?
* What were the technical challenges in creating this robot?
* Who were the principal developers of the robot?
* Explain one technical or theoretical aspect of the system or its function in detail.
* What are the outstanding challenges in this application area?

Make sure you include a works cited slide, and make sure to consult multiple authoritative sources (e.g. not just Wikipedia).

Here is a non-exhaustive list of some example robots you can choose from:

* Mars Exploration Rovers (Spirit & Opportunity)
* Stanley (DARPA Grand Challenge Winner)
* SCHAFT S1 Robot (DARPA Robotics Challenge Winner)
* Neal Scanlan's physical BB-8 prop robot for the Star Wars sequels
* da Vinci System (surgical robot)

You can opt to complete this task individually or in pairs. Email me a link to your completed presentation video.

(#) Maze project - getting started

If you visit the [`e28-fall2023` organization](https://github.swarthmore.edu/e28-fall2023) on the Swarthmore enterprise github, you will find a `final_project` repository for you and your partner(s). Clone it to your computer.

To run the code, you'll only need to run the two launch files:

~~~ none
roslaunch turtlebot_bringup minimal.launch
roslaunch turtlebot_bringup 3dsensor.launch
~~~

# Maze project - tasks

## Implement Dijkstra's algorithm

The `scripts/maze.py` file is a Python module that implements some functionality for representing mazes, but it cannot actually solve a maze by itself. Implement the function **`maze.Maze.solve(x0, y0, x1, y1)`** in `maze.py` and make sure it can solve the example mazes provided when you run the module as a script:

~~~ none
python maze.py
~~~

Look for the `all tests passed` message at the end of the output before you move on to the next task. (A rough sketch covering this task and the next one appears below.)

## Deal with robot orientation

Dijkstra's algorithm on a grid does not consider orientation or heading; however, the robot needs to know whether to turn left or right at each intersection while driving forward from square to square. Implement the function **`maze.path_actions(path, initial_orientation)`** in `maze.py` to return a list of actions -- either `'turnleft'`, `'turnright'`, or `'forward'` -- that direct a robot with the given initial orientation to the end of the maze.

I also recommend that you add some tests to the **`maze._do_tests`** function in `maze.py` to verify that your function works before moving to the next task -- if nothing else, you should print the actions corresponding to some valid maze solutions and verify visually that they are correct.
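If it helps to see the overall shape of these two tasks before diving into `maze.py`, here is a minimal sketch. It assumes a hypothetical `neighbors(cell)` callable that returns the wall-free 4-connected neighbors of an `(x, y)` cell, and it assumes headings are named `'up'`, `'down'`, `'left'`, and `'right'`. The names `grid_shortest_path` and `path_to_actions` are illustrative only and are **not** part of the `maze.py` API -- you will need to adapt the ideas to the actual `Maze` class and to the `maze.path_actions` signature described above.

~~~ python
# Illustrative sketch only -- these are NOT the maze.py APIs. Adapt the ideas
# to Maze.solve() and maze.path_actions() in your own code.

import heapq

def grid_shortest_path(neighbors, start, goal):
    """Dijkstra's algorithm on a grid with unit step costs.

    neighbors(cell) is assumed to return the wall-free 4-connected neighbors
    of an (x, y) cell. Returns a list of cells from start to goal inclusive,
    or None if the goal is unreachable.
    """
    dist = {start: 0}
    prev = {}
    frontier = [(0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            break
        if d > dist[cell]:
            continue  # stale queue entry; a shorter route was already found
        for nbr in neighbors(cell):
            nd = d + 1
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                prev[nbr] = cell
                heapq.heappush(frontier, (nd, nbr))
    if goal not in dist:
        return None
    # Walk the predecessor map backwards to recover the path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    path.reverse()
    return path

# Headings as unit steps in the maze frame (x increases right, y increases up).
DIRS = {'right': (1, 0), 'up': (0, 1), 'left': (-1, 0), 'down': (0, -1)}
LEFT_OF = {'right': 'up', 'up': 'left', 'left': 'down', 'down': 'right'}
RIGHT_OF = {after: before for before, after in LEFT_OF.items()}

def path_to_actions(path, initial_orientation):
    """Convert a cell path into 'turnleft'/'turnright'/'forward' actions."""
    heading = initial_orientation
    actions = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        step = (x1 - x0, y1 - y0)
        assert step in DIRS.values(), 'path cells must be 4-connected'
        # Turn in place until the heading matches the step, then drive.
        while DIRS[heading] != step:
            if DIRS[LEFT_OF[heading]] == step:
                actions.append('turnleft')
                heading = LEFT_OF[heading]
            else:
                actions.append('turnright')
                heading = RIGHT_OF[heading]
        actions.append('forward')
    return actions
~~~

As a sanity check, `path_to_actions([(0, 3), (1, 3), (2, 3)], 'right')` comes out to `['forward', 'forward']`, and a path segment that doubles back produces two consecutive turns.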
## Try out the robot code

Set up the robot near the center of a grid cell, facing 20 to 30° from perpendicular to a wall, as shown in the video below.

!!! Warning
    Make sure the robot is not too close to the wall, as the Kinect minimum range is roughly 0.5 m.

Then, after launching `minimal.launch` and `3dsensor.launch`, run the command

~~~ none
rosrun final_project final_project.py straighten
~~~

You should see the robot turn to face perpendicular to the wall and settle after a small amount of oscillation, like this:

![](https://youtu.be/mATTNLmIBgQ)

Your next task will be to improve this oscillatory behavior. Other commands you can try now or later include:

| Command | Purpose |
|---------|---------|
| **`straighten`** | Make the robot face perpendicular to the wall in front of it |
| **`nudge`** | Move the robot forward or backward to align itself in the middle of a grid cell (e.g. 1.5' or 4.5' from the wall it faces, based on the 3' grid) |
| **`turnleft`** or **`turnright`** | Turn and then **`straighten`** |
| **`forward`** or **`backward`** | Drive forward or backward and then **`nudge`** |

## Improve the PD gains

There are six user-editable parameters at the top of the `final_project.py` file, including $k_p$ and $k_d$ gains as well as maximum allowable velocity commands for both angular and linear motion. Start by editing the **`ANGULAR`** $k_p$ and $k_d$ parameters to improve the robot's performance on the **`straighten`** behavior. For now, leave the maximum commanded velocity alone. Here is a before and after comparison for this process:

![Before](https://youtu.be/mATTNLmIBgQ) ![After](https://youtu.be/oKHkojH23sk)

!!! TIP
    When tuning gains, remember the following general principles:

    * More $k_p$ means faster response.
    * Less $k_p$ means slower response.
    * More $k_d$ tends to help damp oscillation.
    * Insufficient $k_d$ can result in oscillation.
    * Only change one gain at a time, and carefully observe the effect it has on the robot over several trials.
    * Never change a gain by more than 50% at a time. E.g. increasing from $k_p = 10$ to $k_p = 20$ is too big a change.
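To make these rules of thumb concrete: the gains feed a PD (proportional-derivative) control law. The exact form lives in the starter code; the following is only a minimal sketch with hypothetical names, not necessarily how `final_project.py` computes its velocity commands.

~~~ python
# Generic PD update -- an illustrative sketch with hypothetical names, not
# necessarily how final_project.py actually computes its velocity commands.

def pd_command(error, prev_error, dt, kp, kd, max_cmd):
    """One PD step on a single axis (angular for straighten, linear for nudge).

    error is the quantity being driven to zero -- e.g. the angle away from
    perpendicular to the wall, or the distance from the middle of the cell.
    """
    derivative = (error - prev_error) / dt   # how quickly the error is changing
    cmd = kp * error + kd * derivative       # proportional term plus damping
    # Saturate so the output never exceeds the maximum allowable velocity.
    return max(-max_cmd, min(max_cmd, cmd))
~~~

Raising $k_p$ makes the command respond more strongly to a given error, while raising $k_d$ pushes back against rapid changes in the error, which is what damps the oscillation visible in the first **`straighten`** video.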
Once you have **`straighten`** looking good, work on the linear gains as you run the **`nudge`** command. Here is another before/after comparison:

![Before](https://youtu.be/I2-wLM_3qwc) ![After](https://youtu.be/tGyJT3Ts7C4)

Finally, you can input sequences of commands to see whether the robot is able to complete them successfully and without too much overshoot or delay. For example, you could run the command

~~~ none
rosrun final_project final_project.py straighten forward turnleft nudge turnright backward
~~~

...and hopefully the robot would end up roughly in the same place it started.

When you finish this task, **git commit** your work so you can revert to it if you need to during the next task.

!!! WARNING
    Obviously, when running commands in the maze, it is vital to ensure the robot has room to complete them. For example, do not tell the robot to drive forward when it is positioned in the center of a grid cell facing the wall. With that said, **if you *break* the maze, *fix* the maze.**

## Test on the real maze

When you are satisfied with the control gains, try running the robot on the real maze. The format for the command line arguments is

~~~ none
solve maze.txt X0 Y0 STARTDIR X1 Y1
~~~

where `X0`, `Y0`, `X1`, and `Y1` are the coordinates of the start and goal positions. In the maze, `X` coordinates increase to the right and `Y` coordinates increase going up, with the origin in the bottom left corner. The map of the real maze is stored in the `data/maze.txt` file, and it is also drawn on the whiteboard in the lab.

To get from the top left square to the central side of the upper right corridor (labeled start and goal A on the whiteboard), you could run the query

~~~ none
rosrun final_project final_project.py solve maze.txt 0 3 right 2 3
~~~

The command for start/goal pair B would be:

~~~ none
rosrun final_project final_project.py solve maze.txt 3 3 right 1 1
~~~

Make sure you can reliably solve both queries. You may need to further tune the control gains to accomplish this task. You may also find it useful to prepend an initial **`straighten`** behavior to the list of actions returned by your `maze.path_actions` implementation.

!!! TIP
    If your robot attempts to drive through walls, it is likely that you got the starting position in the solve command wrong, or that your implementation of **`maze.path_actions`** is wrong. Double-check the sequence of commands before you let the robot run again. And **if you *break* the maze, *fix* the maze.**

When you finish this task, **git commit** your work so you can revert to it if you need to during the next task.

## Make it fast!

See how fast you can make your robot go without losing reliability. Can you beat my time for query A? You will need to adjust the maximum allowable velocity commands at the top of the file. You will likely also need to change some other parameters. Again, **git commit** when you are happy with your work.

## Totally optional, but possibly a fun way to spend finals week?

If you want to explore further, here are some things you could try:

* *Easier:* Can you have the robot refuse to complete a **`forward`** command if it would result in a collision?
* *Medium:* Can you get the robot to turn on circular arcs instead of turning in place?
* *Harder:* Can you figure out a control law to keep the robot centered in a corridor as it drives along it (e.g. instead of maintaining a perpendicular orientation in the maze, explicitly equalizing the distances to the left and right walls)?

To solve these challenge problems, you will need to get a better understanding of the starter code. Feel free to ask me for advice or guidance!

# What to turn in

Commit and push your completed code via github, and upload videos of your robot traversing the course for queries A and B. Send me the video URL(s).