Final Competition

The culmination of the semester

Wanna know more?

Objective

This class was meant to help us all learn more about intelligent physical systems (i.e. robots that can make their own decisions) by building one of our own. Our robot is named Arduino Schwarznegger, and we wanted him to be able to do the following in time for the final competition: start on a 660 Hz tone, explore and map a maze using depth-first search, avoid other robots by detecting their 6 kHz IR signals, and report his progress wirelessly to a base station.

Procedure

Each week there was a new lab or milestone to finish that would help us achieve the objectives above.

Let's Break it Down

Hardware

Arduino's Skeleton

Arnold has a soldered IR receiving circuit and an audio detection circuit that hang on top of his top chassis, which also carries his IR emitting hat. His middle chassis holds a multiplexer that takes in signals from the wall sensors and line sensors. That mux is on a protoboard that simplifies the power and grounding for the mux, sensors, and motors: on the main protoboard, the two inner ground rails are shorted together, as are the two outer power rails. The mux sits on a carriage in the center of the board with its inputs arranged in an ordered line for easier debugging.

Arnold's bottom chassis contains two batteries: one for the Arduino and one for the servos. One battery is pushed forward to balance him and reduce the possibility of "jumping" in the GUI (he didn't jump).

Software

Bits and Bytes

For each hardware element of our robot, there was a software component behind it. The full code can be found here, on our Github.

To start, Arduino used the acoustic circuit from Lab 2 to detect a 660 Hz tone. As described in that lab, acoustic signals picked up by the circuit's microphone were run through an FFT, which put the strength of the signal at each frequency into a series of bins. To ensure that the FFT wouldn't cost our robot much computing power or time, the bin corresponding to the 660 Hz signal was checked at the beginning of the competition and never again. The audio_detected function did this, and once it returned true, the robot was on its way through the maze.
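A minimal sketch of what that one-shot check might look like. The bin index and amplitude threshold here are assumptions for illustration; the real values depend on the sampling rate and FFT size used in Lab 2.

```cpp
// Hypothetical one-shot 660 Hz check. AUDIO_BIN and AUDIO_THRESHOLD
// are assumed values, not the team's actual calibration.
const int AUDIO_BIN = 5;         // bin containing ~660 Hz (assumed)
const int AUDIO_THRESHOLD = 150; // empirical amplitude cutoff (assumed)

bool audio_detected(const int fft_magnitudes[], int num_bins) {
    // The tone's energy can leak into the neighboring bin, so check it too.
    return (AUDIO_BIN < num_bins &&
            fft_magnitudes[AUDIO_BIN] > AUDIO_THRESHOLD) ||
           (AUDIO_BIN + 1 < num_bins &&
            fft_magnitudes[AUDIO_BIN + 1] > AUDIO_THRESHOLD);
}
```

Because this runs only once at the start, its FFT cost is paid a single time rather than on every loop iteration.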

For the maze traversal itself, once the 660 Hz signal had been detected, our depth-first search (DFS) algorithm was called. This algorithm was described fully in Milestone 3, but to give a high-level overview, it used a recursive function to proceed to every intersection within a maze of specified size. The movement of the robot was encoded into priorities: moving forward took precedence (if there was no front wall, of course), followed by turning left (if there was no left wall), and then turning right (if there was no right wall). Since the function was recursive, once a path had been completely followed and all possible nodes along it visited, the robot would backtrack through the function stack. The final algorithm was relatively simple, but it required some complicated debugging along the way.
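The priority-ordered recursion can be sketched as below. This is a hypothetical stand-in, not our actual implementation: wall() and move_to() substitute for the robot's sensor reads and servo commands, and here they are stubbed (a wall-free 3x3 maze, moves logged to a list) so the sketch runs off-board.

```cpp
#include <vector>
#include <utility>

const int W = 3, H = 3;                 // assumed maze size for the sketch
bool visited[W][H] = {};
std::vector<std::pair<int, int>> path;  // cells in the order driven

bool wall(int x, int y, int d) { return false; }       // stub: no interior walls
void move_to(int x, int y) { path.push_back({x, y}); } // stub: log the move

// Headings: 0 = north, 1 = east, 2 = south, 3 = west.
const int DX[4] = {0, 1, 0, -1};
const int DY[4] = {-1, 0, 1, 0};

void dfs(int x, int y, int heading) {
    visited[x][y] = true;
    // Movement priority relative to the robot: forward, then left, then right.
    int order[3] = {heading, (heading + 3) % 4, (heading + 1) % 4};
    for (int i = 0; i < 3; i++) {
        int d = order[i];
        int nx = x + DX[d], ny = y + DY[d];
        if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
        if (wall(x, y, d) || visited[nx][ny]) continue;
        move_to(nx, ny);
        dfs(nx, ny, d);
        move_to(x, y);  // backtrack through the recursion stack
    }
}
```

The backtracking comes for free: when a branch is exhausted, each stack frame drives the robot back to the cell it came from before trying its remaining priorities.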

Something interesting that this algorithm required, especially when it came to backtracking, was keeping track of direction. North, south, east, and west were static within the maze, but our robot was constantly turning and referencing front, right, and left instead. This meant that we had to store the global direction, as well as a local variable recording the direction the robot had been facing the last time it visited a node, so that we always knew which walls needed to be checked with respect to the original orientation.
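The bookkeeping boils down to modular arithmetic on a heading. The encoding below (0 = north, 1 = east, 2 = south, 3 = west) and the helper names are assumptions for illustration, but the relative-to-absolute translation is the idea described above.

```cpp
// Heading encoding (assumed): 0 = north, 1 = east, 2 = south, 3 = west.
int left_of(int heading)    { return (heading + 3) % 4; }
int right_of(int heading)   { return (heading + 1) % 4; }
int reverse_of(int heading) { return (heading + 2) % 4; }

// Translate the robot's relative wall readings (front/left/right) into
// absolute N/E/S/W walls, so a revisited node can be compared against
// the walls stored on an earlier visit.
void relative_to_absolute(int heading, bool front, bool left, bool right,
                          bool walls_abs[4]) {
    walls_abs[heading]           = front;
    walls_abs[left_of(heading)]  = left;
    walls_abs[right_of(heading)] = right;
    // The wall behind is not sensed; it is already known from the
    // cell the robot just came from.
}
```

For example, a "left wall" seen while facing east is stored as a north wall, so the same physical wall matches no matter which way the robot approaches the node later.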

This algorithm also ensured that there would be no collisions with other robots. As completed in Lab 2, we had built a circuit to detect IR signals at 6 kHz. Unlike the acoustic FFT, this FFT ran constantly, costing us computing time but making sure we didn't run into our competition. To integrate this circuit into the robot, we wrote a robot_detected() function, which checked the bins relevant to 6 kHz and returned a boolean indicating whether a robot was nearby. The circuit had a small detection range due to its phototransistor, so it was pointed forward on our robot to check for robots ahead. Our concern was instigating a crash, not merely participating in one, and it would have been clear that we caused the crash if we ran into another robot in front of us.
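A sketch of what the continuously-polled IR check might look like. The bin range and threshold are assumed values; only the function name robot_detected() comes from our code.

```cpp
// Hypothetical 6 kHz IR check, run every loop (unlike the one-shot
// audio check). Bin range and threshold are assumptions.
const int IR_BIN_LO = 42;      // first bin near 6 kHz (assumed)
const int IR_BIN_HI = 43;      // last bin near 6 kHz (assumed)
const int IR_THRESHOLD = 100;  // empirical amplitude cutoff (assumed)

bool robot_detected(const int fft_magnitudes[], int num_bins) {
    for (int b = IR_BIN_LO; b <= IR_BIN_HI && b < num_bins; b++)
        if (fft_magnitudes[b] > IR_THRESHOLD)
            return true;  // another robot's IR hat is in range ahead
    return false;
}
```

Checking a small range of bins rather than one compensates for the emitter frequency drifting slightly between robots.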

If a robot was detected, robot_reacted() was called: a function we wrote to turn the robot around, proceed to the intersection it had just visited, and continue traversing the maze as normal. It was essential that we continued with our recursive DFS implementation even after a robot was detected, because we needed to explore as much of the maze as possible, and the backtracking aspect of our algorithm allowed us to do that.

All of this information had to be confirmed by an actual human, so, using our code from Lab 3, the location of the robot (stored in global variables updated by our DFS functions), the walls surrounding it (also updated by the DFS functions), and whether or not a robot had been detected were all transmitted in a compressed bit stream to our base station radio.
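One way such an update can be compressed into a single 16-bit word is sketched below. This layout is hypothetical; the actual Lab 3 packing scheme may assign the fields differently.

```cpp
#include <cstdint>

// Hypothetical packing of one status update: position, the four walls,
// and the robot-detected flag fit in 13 bits of a uint16_t.
uint16_t pack_update(int x, int y, const bool walls[4], bool robot_near) {
    uint16_t msg = 0;
    msg |= (uint16_t)(x & 0xF);        // bits 0-3:  x coordinate
    msg |= (uint16_t)(y & 0xF) << 4;   // bits 4-7:  y coordinate
    for (int d = 0; d < 4; d++)        // bits 8-11: N/E/S/W wall flags
        if (walls[d]) msg |= (uint16_t)1 << (8 + d);
    if (robot_near)
        msg |= (uint16_t)1 << 12;      // bit 12: robot detected
    return msg;
}
```

Sending one small word per intersection keeps radio airtime low, which matters when the FFT and DFS are already competing for loop time.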

Restructuring

Making Everything Look Good

We made sure to keep our code organized and easy to understand. Throughout the semester we accumulated many files, some of which were valuable, others not. Fortunately, we implemented a number of methods to keep our code clean and our software easily manageable.

We maintained our software with strict version control on both lab and milestone .ino files, via clearly labeled folder names and clear git commit messages in our GitHub repository. This basic naming convention was a simple start to our file management.

Our team also used GitHub effectively. We never had to deal with merge conflicts or other branch issues; this relied on effective communication between members when making changes, or when pulling and pushing code to separately test and debug software.

Thirdly, our team members commented all of the code we wrote, making it easy for code to be passed between teammates and understood quickly (time was of the essence).

Lastly, after we finished our final code, we spent time writing separate header (.h) and C++ (.cpp) files to import into a main file, making our program shorter, simpler, and easier to read. Although we could have refactored even further, we ended up creating four different classes for four crucial systems on our robot.

The first class, Locomotion, imports the Servo library, instantiates the servos for the wheels, and has functions that write to individual wheels. One instance of Locomotion is created in our dfs_main file.

The second class, Radio, imports several libraries used for radio communication between our robot and our base-station Arduino. It defines a single function, pingOut; since several other radio functions used variables from dfs_main, we found it simplest to leave those functions in dfs_main. One instance of Radio is created in our dfs_main file.

The third class, ToneDetect, imports the FFT library, and contains two functions: audio_detected() and robot_detected(). These are, quite obviously, used for detecting 660 Hz sound and 6 kHz IR light. One instance of ToneDetect is created in our dfs_main file.

The fourth class, WallSensor, is used to contain the code for reading wall sensor information and detecting if a wall is present. Three instances of WallSensor (for the front, left, and right wall sensors) are created in dfs_main.
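As a flavor of the split, here is a sketch of what a class like WallSensor might look like once pulled into its own header. Only the class name comes from our write-up; the members are assumptions, and the raw reading is passed in as an argument so the sketch compiles without Arduino's analogRead().

```cpp
// Hypothetical WallSensor.h contents. On the robot, wall_present()
// would call analogRead(pin_) itself instead of taking an argument.
class WallSensor {
public:
    WallSensor(int pin, int threshold) : pin_(pin), threshold_(threshold) {}

    // True if the IR distance reading indicates a wall in range.
    bool wall_present(int raw_reading) const {
        return raw_reading > threshold_;
    }

    int pin() const { return pin_; }

private:
    int pin_;        // analog pin the distance sensor is wired to
    int threshold_;  // reading above which a wall is considered present
};
```

Three instances (front, left, right) then differ only in their constructor arguments, which is exactly what makes the class worth extracting from dfs_main.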

The rest of dfs_main contains mostly algorithms and functions for traversing the maze according to the DFS algorithm.

Result

The Final Product

Arduino Schwarznegger was by no means the best robot out there, but he is a robot we are proud of, both for what he could do and for what he accomplished during the final competition. We hope you enjoy our work; we've had a heck of a time creating it!