Senior Design Update #4

Another small step for this Senior Design group:

Since the last time, we’ve made the wrist move, and the ugly brown box on the upper arm houses a bunch of motors that connect to the fingers. They aren’t hooked up in the picture, though, so the fingers aren’t moving yet. I’m afraid the weight of the hand is putting a lot of strain on the shoulder motor, and the whole thing is still very shaky. I’m going to try some new movement methods to smooth it out, but I think we are approaching the hardware’s limitations.
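One candidate smoothing method (just a sketch of the idea, with placeholder names and a placeholder constant, not our actual code) is to ease each servo toward its Kinect-derived target instead of jumping straight there, e.g. an exponential moving average:

// Ease the commanded angle a fraction of the way toward the target
// each frame; smaller alpha = smoother but laggier motion.
float smoothed = 90.0f;    // current commanded servo angle, degrees
const float alpha = 0.2f;  // smoothing factor, 0 < alpha <= 1

float smoothAngle(float target){
    smoothed += alpha * (target - smoothed);
    return smoothed;
}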

Once we get the finger motors connected to their respective digits, we will hopefully, finally, have a fully functional arm!


Senior Design Update #3

I’m back with another update to my senior design. Check it out below:

If you’ll excuse the suspicious lack of a thumb*, it’s coming together rather nicely. There are still a few kinks to work out. Motors are mounted on the upper arm, and they pull drawstrings attached to the fingers in order to make the fingers move. We managed to burn one out by stressing it too much, so we might have to upgrade them. I’m a little apprehensive about doing that, because I don’t want to put any more weight on the arm, and better equipment usually means more weight. But we’ll see. It’s also tricky finding a configuration for the motors in the limited space available so that they don’t interfere with one another’s operation. Oh, and it’s so much fun to play with! What’s NOT shown, which we got working shortly after, is that it rotates about the wrist, too! Using an accelerometer that’s mounted on the glove as well, we are able to detect the tilt off axis when the hand is roughly horizontal, and use that as the motor reference.
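For the curious, the tilt detection boils down to something like the sketch below (an illustration under my own assumptions; the axis labels depend on how the accelerometer is actually mounted). With the hand roughly horizontal, gravity serves as the reference, and atan2 recovers the roll about the forearm axis:

#include <math.h>

// ay, az: accelerometer readings in g's (axis labels assumed).
// Returns roll about the forearm axis in degrees; only meaningful
// when the hand is close to horizontal, since gravity is the reference.
float wristRollDegrees(float ay, float az){
    return atan2f(ay, az) * 180.0f / 3.14159265f;
}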

So this is roughly the other half of the equation, our glove/hand combo. In my last post there is a video of the rest of the arm working(ish):

As soon as the kinks are worked out with the hand, the next step will be to operate the two parts in unison. Then we can move onto the testing phase to make it demo-ready.

Senior Design Update #2

This is a very exciting post: a big development on the senior design. It was technically accomplished shortly after the last post, but it’s been an (even) busier time than usual lately. On Oct 25 I participated in IEEEXtreme, a 24 HOUR programming competition. Keep in mind that I do not consider programming my main trade. Despite that, my team and I ranked 11th in the US 😀 And last weekend, I was flown out of state for an interview, so I spent the week leading up to it doing interview prep.

Onto the real heart of the post though. Observe our progress:

Cool, right? So how do we do it? I present the picture below.

elbow and shoulder live test

(Never mind my expression, I may not have been all there…) As per the last post, once we got the 3D coordinates of the joints out of the Kinect program, we immediately got to work on this. The basic idea is that given points, you can extend vectors between them. The angle theta between vectors A and B then satisfies cos(theta) = (A dot B)/(|A||B|). It remains to find reference vectors for our angles, which is what the picture above shows. Now, the elbow is eeeeaaasssyy, since it only bends one way. But your shoulder is more complicated. Our system is set up so that one degree of freedom lets your outstretched arm sweep in front of you, and the other degree of freedom sweeps vertically, normal to your outstretched arm. Neither directly provides a clear reference direction.

Currently, one angle is obtained by placing a vector through my head (figuratively) and another along my upper arm. The other angle is obtained by placing a vector across my shoulder blades and another along my upper arm, BUT THEN PROJECTED onto the horizontal plane containing my shoulders. It’s not a terribly elegant solution, since the projection gets smaller as the arm drops lower. I’m considering giving myself a crash course in kinematics to see if anything can help us on our project.
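In code, the two ingredients look something like this (my own illustration; the Vec3 type and the function names are invented for this post). The first function is the dot-product formula from above; the second projects a vector onto a plane by subtracting off its component along the plane’s normal, which is what we do to the upper-arm vector before measuring the second shoulder angle:

#include <math.h>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

// theta = acos((A dot B)/(|A||B|))
float angleBetween(Vec3 a, Vec3 b){
    return acosf(dot(a, b) / (sqrtf(dot(a, a)) * sqrtf(dot(b, b))));
}

// Project v onto the plane with normal n by removing v's component along n
Vec3 projectOntoPlane(Vec3 v, Vec3 n){
    float s = dot(v, n) / dot(n, n);
    return Vec3{ v.x - s*n.x, v.y - s*n.y, v.z - s*n.z };
}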

Until next time!

Senior Design Update #1

Time for the first update on my senior design. The first thing I thought we should do was to set up everything Kinect-related. The software is a big part of our project, and it can be worked on anywhere as long as we have the Kinect with us. We are doing all the PC programming in Visual Studio C++, because Microsoft has a well-established SDK for it, and Microsoft actually released version 1.8 just a few weeks before we began this project. To get the angles between the joints that we need, we first have to retrieve the coordinates of the joints in 3D space. We used this tutorial, which helped us create a simple program: http://mzubair.com/getting-started-building-your-first-kinect-app-with-c-in-visual-studio/

From this, we gathered that all the data we need is inside a data structure “myFrame” of the type “NUI_SKELETON_FRAME”. “myFrame” has a field called “SkeletonData”, which is actually an array, because the Kinect library is capable of tracking multiple people. That’s irrelevant here, though, since there is only one user currently, so the data of interest is in “myFrame.SkeletonData[0]”. For any tracked person, “SkeletonData[i]” has a field “SkeletonPositions”, which is yet another array, where each entry is a 4-tuple (w, x, y, z), and that is what we need. To index appropriately, the SDK defines an enumerated type “NUI_SKELETON_POSITION_INDEX”, with elements such as “NUI_SKELETON_POSITION_SHOULDER_LEFT”, which index into “SkeletonPositions” to get you what you want. Here is the code to print out the 3D coordinates of the right shoulder:

cout << "(";
cout << myFrame.SkeletonData[0].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_RIGHT].x << ", ";
cout << myFrame.SkeletonData[0].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_RIGHT].y << ", ";
cout << myFrame.SkeletonData[0].SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_RIGHT].z << ")";
//NOTE: "w" or any joint is always "1"

So how do these positions of various joints translate into angles? A rudimentary way involves using the dot product. Suppose we have a vector that
– starts at the elbow and extends to the shoulder (call this vector u)
– another vector that starts at the elbow and extends to the wrist (call this vector v)

Then the angle “theta” between them satisfies u (dot) v = |u||v|cos(theta). We may have to do some fancy things like filtering on the data, but I think that this will be the main idea in obtaining the angles.
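To make that concrete, here’s roughly what the elbow computation could look like with the Kinect structures from above (an illustration, not our finished code; the right-arm indices come from the SDK’s NUI_SKELETON_POSITION_INDEX enum):

#include <cmath>

// u = elbow -> shoulder, v = elbow -> wrist; returns theta in radians
float elbowAngle(const NUI_SKELETON_DATA &s){
    Vector4 e  = s.SkeletonPositions[NUI_SKELETON_POSITION_ELBOW_RIGHT];
    Vector4 sh = s.SkeletonPositions[NUI_SKELETON_POSITION_SHOULDER_RIGHT];
    Vector4 w  = s.SkeletonPositions[NUI_SKELETON_POSITION_WRIST_RIGHT];
    float ux = sh.x - e.x, uy = sh.y - e.y, uz = sh.z - e.z;
    float vx = w.x - e.x,  vy = w.y - e.y,  vz = w.z - e.z;
    float d  = ux*vx + uy*vy + uz*vz;             // u (dot) v
    float nu = sqrtf(ux*ux + uy*uy + uz*uz);      // |u|
    float nv = sqrtf(vx*vx + vy*vy + vz*vz);      // |v|
    return acosf(d / (nu * nv));
}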

To close, here’s part of the hand that we plan to make the robot’s end effector:

The picture cuts it off, but here’s how it works: it’s roughly anatomically correct, with all the joints where they’re supposed to be. Shown is the “default” state for the hand. There’s a string that’s fixed to the end of a finger, and it is fed through the inside of the finger (the “bones” are hollow rubber tubes). The hand part is also constructed from a series of hollow tubes encased in foam, so that there is a path from the fingertip to the bottom of the hand, where the other end of the string comes out. When the string is pulled, the finger bends, and when the string is released, the finger returns to the default state. The idea is to tie the end that comes out of the bottom of the hand to a motor, and map servo motor pulses to the degree of finger bend. Here’s a video describing what I mean:

(Cameron Reid featured in the video)

Well, I think that’s enough for one update. Look out for the next one where I will hopefully have the shoulder working.

Senior Design Proposal

Alright, just this class stands between me and my undergraduate degree: Senior Design. As my last post indicated, my project involves a robotic arm with a Kinect interface. I am completing the project with the help of my group members Cameron Reid, Chris Stubel, and Carlton Beatty. Here is the brainstorm doodle from the last post:


The premise is pretty simple. You (the user) move your arm, and the system tracks your movements and projects them onto a robotic arm, mimicking your actions in real time. Realistically, this could be used to introduce the human element where humans can’t safely go, such as bomb disposal, battle situations, and disaster relief. Unrealistically? Well, maybe you’ve heard of a little movie recently called Pacific Rim… I think that would be pretty cool.

Our plan is to use a simple entry-level robotic arm, such as this AL5D by Lynxmotion:

so as to avoid designing our own arm, which is more work than we want to take on under the time restriction. We’ve got an arm to control, but what’s going to be doing the tracking? That’s where the Kinect comes in. The Kinect is an incredible piece of technology. It’s got sensors out the wazoo, and Microsoft has a great SDK that goes along with it so anybody can make apps utilizing the Kinect. In particular, they’ve got a skeletal tracking library, which will enable us to detect and retrieve skeleton joints in 3D space. We will get (at least) all the major joints of the arm: shoulder, elbow, and wrist, and turn the coordinates into the angles that the limbs form. These angles will get transmitted to a microcontroller that controls the servos on the arm.

Now, look at the picture above, and look at what’s on the end. I want to take that simple claw and put an animatronic hand in its place. By putting something more like a hand there, I’m hoping we can give this system more dexterity. Definitely not to the degree of our own hands, but at least more than the claw that the arm comes with. Accomplishing this will take some creativity, since the Kinect tracking system we are using for the arm doesn’t have the resolution to track individual digits. Instead, we are going to construct a glove outfitted with flex sensors over the fingers. As a finger bends, the sensor reading is read by a microcontroller, and the microcontroller in turn controls motors to move the robotic fingers.
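In pseudo-Arduino terms, the loop for a single finger might look like the sketch below. The pin numbers and the 200-800 sensor range are placeholders, not measured values:

#include <Servo.h>

Servo fingerServo;

void setup(){
    fingerServo.attach(9);                    // servo signal pin (placeholder)
}

void loop(){
    int bend  = analogRead(A0);               // flex sensor voltage divider on A0
    int angle = map(bend, 200, 800, 0, 180);  // sensor range -> servo degrees
    fingerServo.write(constrain(angle, 0, 180));
    delay(20);                                // ~50 Hz update rate
}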

Here is the above in flow diagram form:

Senior design top level flow chart

Like I said previously, this project is actually in progress already, so the next post will include the first update.

Senior Design Proposal

I’m excited to announce my senior capstone project: Robotic Arm with Kinect Interface (it’s a …working title). Let me show you my vision with this amateur sketch.

The idea is simple: to make a robotic limb mimic a human user’s arm. However, I won’t go into details in this post. The next posts will go into the project in more detail, as well as updates on the current progress. So excited!

Project Euler 139 – Pythagorean Tiles

Problem: Consider the picture below (image credits to Project Euler)

The four triangles are assumed to be right triangles. When placed in the arrangement shown, they leave a square hole in the middle. Considering all right triangles with integer side lengths and perimeter less than 100,000,000, how many of them make a hole that can be used to tile the larger resulting square?

For any such right triangle, let the shorter leg be “a”, the longer leg “b”, and the hypotenuse “c”, so that any triangle can be identified by a tuple (a, b, c). It then follows that the hole is a square with side length b - a. From the image, it also follows that the larger square has side length equal to the hypotenuse of the constituent triangles. For the larger square to be tileable by the hole, its side length must be an integer multiple of the hole’s side length, or put simply: c = (b - a)k, for some positive integer k. For example, the (3, 4, 5) triangle leaves a hole of side 4 - 3 = 1, which trivially tiles the 5 x 5 square. This would be a simple test, supposing you could generate all Pythagorean triples (a, b, c) whose perimeter is within the bounds of the question.

Given the problem parameters, it wouldn’t be prudent to iterate blindly over “a”, “b”, and “c” to generate the triples. The easiest way I know is Euclid’s formula (http://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple). For any pair (m, n) with m > n, the following always form a Pythagorean triple:

a = m^2 - n^2

b = 2mn

c = m^2 + n^2

You can check that a^2 + b^2 = c^2 for yourself. Moreover, this formula is complete, in the sense that it can generate all primitive Pythagorean triples, a primitive triple being an (a, b, c) whose greatest common divisor is 1. The generated triple is guaranteed to be primitive as long as m and n are coprime and of opposite parity. For example, (m, n) = (2, 1) yields the familiar (3, 4, 5). Lastly, as we all know, any integer multiple of a Pythagorean triple is also a Pythagorean triple. Armed with this, we can generate all triples.

To make sure that each primitive we generate is unique, assume that m > n. This also reduces the number of iterations considerably. Again, the process we’ll go through generates primitive triples, but it is a simple matter to count how many similar triangles also fit the perimeter bound: simply divide 100000000 by the primitive triple’s perimeter (integer division). For instance, the (3, 4, 5) primitive, with perimeter 12, stands in for 100000000/12 = 8333333 similar triangles.

Here is the code that accomplishes all that.

// Euclid's formula gives a primitive triple when m > n,
// gcd(m, n) = 1, and m, n have opposite parity.
long long count = 0;
for(int n = 1; n < 10000; n++){
    for(int m = n+1; m*m + n*n <= 100000000; m++){
        if((m-n)%2 == 1){
            if(gcd(m, n) == 1){
                int x = m*m - n*n;  // one leg (not always the shorter one)
                int y = 2*m*n;      // the other leg
                int z = m*m + n*n;  // hypotenuse
                if(x+y+z < 100000000){
                    // hole side |x-y| must divide the hypotenuse; each
                    // primitive counts once for every multiple of it
                    // whose perimeter stays under the bound
                    if(z % abs(x-y) == 0) count += 100000000/(x+y+z);
                }
            }
        }
    }
}

GCD is done with this little code snippet. It’s very efficient and very useful to have:

// Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a
int gcd(int a, int b){
    if(b == 0) return a;
    else return gcd(b, a%b);
}

In the end, count = 10057761.