After the final presentations of our piece, "Zwischenkoerper", I would like to share a few reflections.
Positive Notes:
- I feel like our biggest improvement between the April 29th performance and the later performances was getting all three pillows to communicate reliably with the base station at the same time. Once we accomplished this, it was much easier for the audience to see how the pillows related to the visualization on the projection screen.
- Andrea did an excellent job in every performance, both with the choreography and in interacting with the various elements of our system.
- We got all three pillows to communicate!! I had my doubts, but we were able to pull it off. Thanks to Catherine and Thomas for putting in extra-long hours to get them all working.
- The Kinect worked like a charm. We never had to update the code or change anything about the motion capture system after our initial successful implementation.
Negative Notes:
- The interaction with the pillows was still much coarser than we had originally imagined. We wanted particular interactions to show on the screen from particular sensors on the pillows. It turned out that the best we could do was: the small pillow shows up when rotated, the medium pillow shows up when thrown around on the ground, and the large pillow shows up when crushed or laid upon.
- I wish we had gotten the growing and shrinking visualization for Andrea into the final show. I think it would have made the piece feel more interactive on Andrea's side.
- It would have been cool for the audience to be able to interact with the pillows while Andrea was dancing. As it was, our piece lacked the audience interaction we were originally aiming for.
Conclusion:
I feel very proud of our team and of our final project. It turned out way better than I imagined it would, and I feel like the system we created was both fun for Andrea to work with and fun for the audience to watch. Once again, the talents of our multidisciplinary team were put to good use to create a fascinating result.
Saturday, May 4, 2013
Final Piece - Motion Capture Strategy and Code
For our final project, we had two methods for motion capture: Kinect video processing of Andrea's movement, and the various sensors embedded in the three pillow bodies.
For the Kinect video processing, we wanted to keep a very minimalistic and natural approach to motion capture. We had seen some of the shortcomings of the Kinect in earlier sketches by other teams, and we wanted to keep it as "out of sight, out of mind" as possible. After playing around with the Kinect SDK, I first decided to base my program on the DepthBasics sample, which was fairly accurate at capturing depth data. My idea was to capture the outline of Andrea's body and track how large or small its area was. This led me to look into the user-tracking feature of the Kinect. It turns out, however, that the Kinect must have skeleton tracking enabled in order to use user tracking.
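To make that abandoned depth approach concrete, here is a minimal sketch (not our actual code) of how the area tracking would look, assuming the Kinect for Windows SDK 1.x: open the depth stream with player indices and count the pixels belonging to the user.

```cpp
// A sketch (not our actual code) of depth-based area tracking,
// assuming the Kinect for Windows SDK 1.x.
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>
#pragma comment(lib, "Kinect10.lib")

int main()
{
    // Player-index bits require skeleton tracking to be enabled,
    // which is the constraint that pushed us toward SkeletonBasics.
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX |
                             NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;

    HANDLE depthStream = NULL;
    NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX,
                       NUI_IMAGE_RESOLUTION_320x240, 0, 2, NULL, &depthStream);

    for (;;)
    {
        NUI_IMAGE_FRAME frame;
        if (FAILED(NuiImageStreamGetNextFrame(depthStream, 100, &frame)))
            continue;

        NUI_LOCKED_RECT rect;
        frame.pFrameTexture->LockRect(0, &rect, NULL, 0);

        // Each 16-bit depth pixel packs a 3-bit player index in its low bits;
        // counting nonzero indices approximates the user's on-screen area.
        const USHORT* pixels = (const USHORT*)rect.pBits;
        int bodyArea = 0;
        for (int i = 0; i < 320 * 240; ++i)
            if (NuiDepthPixelToPlayerIndex(pixels[i]) != 0)
                ++bodyArea;

        printf("body area: %d px\n", bodyArea);  // grows and shrinks with pose

        frame.pFrameTexture->UnlockRect(0);
        NuiImageStreamReleaseFrame(depthStream, &frame);
    }
}
```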
That constraint led me to experiment with the SkeletonBasics sample program. I soon found that skeleton tracking did not work well when the body was upside down or lying on the floor. The solution was to rely on the center-of-mass data instead. After capturing the center of mass, I was able to pipe this information into Touch Designer, where we used it to control the visualization based on position and speed of movement.
https://dl.dropboxusercontent.com/u/104459740/SkeletonBasics.cpp
https://dl.dropboxusercontent.com/u/104459740/SkeletonBasics-D2D.exe
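The links above contain the full program; for reference, here is a stripped-down sketch of the center-of-mass capture and the hand-off to Touch Designer. It assumes the Kinect for Windows SDK 1.x and sends a plain UDP text message; the port (7000) and message format are illustrative stand-ins for whatever your Touch Designer network expects (e.g. a UDP In DAT).

```cpp
// A stripped-down sketch of the center-of-mass capture, assuming the
// Kinect for Windows SDK 1.x and a plain UDP text message into Touch
// Designer. The port and message format are illustrative.
#include <winsock2.h>
#include <Windows.h>
#include <NuiApi.h>
#include <cstdio>
#pragma comment(lib, "Kinect10.lib")
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(7000);                    // e.g. a UDP In DAT
    dest.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;
    NuiSkeletonTrackingEnable(NULL, 0);

    for (;;)
    {
        NUI_SKELETON_FRAME frame;
        if (FAILED(NuiSkeletonGetNextFrame(100, &frame)))
            continue;

        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];

            // Position (the center of mass) is reported even in the
            // POSITION_ONLY state, so it survives the upside-down and
            // on-the-floor poses that break full skeleton tracking.
            if (s.eTrackingState == NUI_SKELETON_NOT_TRACKED)
                continue;

            char msg[64];
            int len = sprintf_s(msg, "com %f %f %f",
                                s.Position.x, s.Position.y, s.Position.z);
            sendto(sock, msg, len, 0, (sockaddr*)&dest, sizeof(dest));
        }
    }
}
```

Speed of movement then falls out on the Touch Designer side by differencing successive positions.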
The next part of the motion tracking strategy was to create the three interactive pillows and capture data from each one. Our original goals for the pillows were as follows:
Small - Use an accelerometer to determine the orientation and speed of movement of the pillow.
Medium - Have multiple "hairs" (flex sensors) which would respond when petted.
Large - Have multiple squish sensors which would respond when hugged.
These goals were more or less accomplished in our final product, but not to the granularity we originally imagined. We successfully utilized the small pillow and its accelerometer. The medium pillow turned out to be very finicky, responding only when significant force was applied (as when Andrea threw it around). The large pillow acted similarly, requiring a good deal of force to set off the sensors, and even then they were not 100% consistent.
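For a sense of the firmware side, here is a minimal Arduino-style sketch of the one interaction that worked well: detecting rotation of the small pillow from its accelerometer. Everything concrete in it (pins, the mid-scale zero-g assumption, the serial output) is an illustrative stand-in rather than our actual code.

```cpp
// An Arduino-style sketch of the small pillow's rotation sensing. Pins,
// thresholds, and the serial output are illustrative; our actual firmware
// and the radio link to the base station differed.
const int X_PIN = A0, Y_PIN = A1, Z_PIN = A2;

void setup() {
  Serial.begin(9600);  // stand-in for the radio link to the base station
}

void loop() {
  // Assume the accelerometer's zero-g output sits near mid-scale (~512);
  // a real sketch would calibrate this per axis.
  int x = analogRead(X_PIN) - 512;
  int y = analogRead(Y_PIN) - 512;
  int z = analogRead(Z_PIN) - 512;

  // Gravity dominates whichever axis points down, so the largest reading
  // gives a coarse orientation; a change of dominant axis means rotation.
  static char lastAxis = '?';
  char axis = 'z';
  if (abs(x) >= abs(y) && abs(x) >= abs(z)) axis = 'x';
  else if (abs(y) >= abs(z))                axis = 'y';

  if (axis != lastAxis) {
    Serial.print("rotated: now resting on ");
    Serial.println(axis);
    lastAxis = axis;
  }
  delay(50);
}
```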
The lessons learned in this project were twofold: figure out early what type of motion tracking is desired, and correctly gauge the time and resources necessary to accomplish it.