Some time ago I announced my final project for my Embedded Systems class: a voice-activated chess-playing robot. It turns out another group did something similar a couple of semesters ago, and they called it Kasparobot, clearly named after Garry Kasparov. It played against a chess engine by registering moves through OpenCV. Inspired by the name, I’ve christened our robot CarlsenBot, after Magnus Carlsen. I spent a few days looking at similar projects, namely this one: letsmakerobots.com/node/20833. It’s very well documented, and the various pictures help immensely. I’m very glad I found it, as it will provide a most excellent skeleton for my own project. My team and I spent a few hours digging through the parts in the lab and rigging together this setup:
Zoom in on the first picture. The green thing is a toothed rack, which converts rotary motion from a motor into linear motion via a pinion gear that travels along the rack. We plan to place racks on the two arches to create movement along one axis, and to build a movable platform, positioned by motors and pinions, that will allow movement along the other two axes. You can also see a claw that looks like it came from the same set as the racks.
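For planning purposes, the rotary-to-linear conversion is simple: one pinion revolution advances the carriage by (teeth on the pinion) × (rack tooth spacing). A quick back-of-the-envelope sketch in Python, where the tooth count and pitch are placeholder guesses until we pick the actual gears:

```python
# Placeholder specs -- we won't know the real numbers until we order parts.
PINION_TEETH = 16    # teeth on the pinion gear (assumed)
RACK_PITCH_MM = 5.0  # rack tooth spacing in mm per tooth (assumed)

def travel_mm(revolutions: float) -> float:
    """Linear travel along the rack for a given number of pinion revolutions."""
    return revolutions * PINION_TEETH * RACK_PITCH_MM

print(travel_mm(1.0))   # 80.0 mm per full revolution with these numbers
print(travel_mm(0.25))  # 20.0 mm for a quarter turn
```

With real gear specs plugged in, this tells us how precisely the motors need to be controlled to land the claw over a given square.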
* To my annoyance, the arches are slightly less than parallel, but I suppose that as long as the racks are positioned parallel, it doesn’t really matter.
We will have to order a few new parts pronto, and assemble the platform I have in mind.
In parallel, we also have to get a program running that recognizes dictated commands. For that, I think we might use this: http://msdn.microsoft.com/en-us/vstudio/cc482921.aspx. We’ve also got another option in EasyVR, a voice recognition hardware module: https://www.sparkfun.com/products/10685. It has the advantage that another user has already written a library for it on our microcontroller platform, but the disadvantage of needing to be trained.
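Whichever recognizer we end up with, its output still has to be mapped onto board coordinates before the robot can move anything. Here is a minimal Python sketch of that step, assuming (hypothetically) that the recognizer hands us normalized strings like "move e2 to e4" — the actual command grammar is still undecided:

```python
import re

# Assumed command grammar: "move <file><rank> to <file><rank>".
# Files a-h, ranks 1-8, matching a standard chess board.
MOVE_RE = re.compile(r"^move\s+([a-h])([1-8])\s+to\s+([a-h])([1-8])$", re.IGNORECASE)

def parse_move(command: str):
    """Turn a recognized voice command into ((file, rank), (file, rank)), or None."""
    m = MOVE_RE.match(command.strip())
    if not m:
        return None  # not a move command; ignore or ask the speaker to repeat
    src = (m.group(1).lower(), int(m.group(2)))
    dst = (m.group(3).lower(), int(m.group(4)))
    return src, dst

print(parse_move("move e2 to e4"))  # (('e', 2), ('e', 4))
print(parse_move("hello robot"))    # None
```

Returning None on anything unrecognized gives us a cheap way to reject recognizer noise instead of sending the gantry somewhere random.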
Hopefully a rough prototype can be completed this week, and finished next week, with plenty of time to debug before the demo.