Documentation is a critical part of the making process. It allows us to share our work, learn from our mistakes, and teach others. So why is it often undervalued or neglected? Simply put, good documentation is difficult. It requires additional time and effort, but more importantly, it requires us to stop making and redirect our attention every time something interesting happens.
This can be especially difficult when it comes to making physical objects, where images and video are crucial to comprehensible documentation. Standard setups require keeping track of camera battery and memory levels, and taking the time to charge, set up, and take down rigs. That work is followed by either one long video or several separate shots, both of which are tedious to edit and make it difficult to pick out key moments.

My goal for this project is to design, build, and test a device that lets makers capture and organize video without being pulled away from their work, and that eases the post-processing of converting raw footage into a usable end product. To meet this goal, my project centers on a camera device meant to sit on a workbench in a classroom. It includes a camera, buttons, an RFID reader, and a Raspberry Pi, among other components. As shown in the primary use case scenario, the only setup required of the user is to place their ID card on the device, press the record button, and press other large buttons whenever a moment seems important enough to mark. The flexible neck of the device allows it to be moved around easily and placed in the right position. The rest is taken care of behind the scenes.

The flowchart below shows an initial concept for the code structure that will run on the Raspberry Pi. Once a user places their card on the reader, all video they capture is automatically organized, edited, and saved to a folder in their Google Drive account. If they prefer more control over the video, they can easily access clips that have been cut, sorted, and renamed based on the times they marked moments as important. Rather than searching through hours of footage, they can manage their library more easily, without the difficulties of limited camera storage and file transfers. The device also takes this a step further by generating its own video edit.
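To illustrate the cutting-and-sorting idea, here is a minimal sketch of how button-press timestamps might be turned into clip intervals for splitting the raw footage. The function name, padding window, and overall approach are my assumptions, not the project's actual code.

```python
# Hypothetical sketch: turn marked moments into clip intervals.
# `marks` are button-press times in seconds since recording started.

def clip_intervals(marks, duration, pad=10.0):
    """Return merged [start, end] windows of +/- `pad` seconds around
    each mark, clamped to the length of the recording."""
    windows = sorted((max(0.0, t - pad), min(duration, t + pad)) for t in marks)
    merged = []
    for start, end in windows:
        if merged and start <= merged[-1][1]:
            # Overlapping windows collapse into one longer clip.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# Two nearby marks merge into a single clip:
print(clip_intervals([30, 45, 200], duration=300))
# -> [[20.0, 55.0], [190.0, 210.0]]
```

Each resulting interval could then be cut from the raw file and named by its timestamps, which is one way to get the organized folder described above.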
Based on the user’s button presses, the algorithm approximates the most important time periods in the user’s process. It then speeds up the less important portions while leaving the important parts to play at normal speed. This condenses the build session into a single short video that shows the user’s entire process while highlighting key moments, yet requires no editing time at all. I hypothesize that removing the barrier to entry that editing creates will encourage makers who otherwise would not to create and share video documentation of their projects.

The device also has the potential to aid asynchronous online maker education. Instructors need finer control over their video edits to add audio and enhance their explanations, which does not lend itself well to algorithmically generated edits, but they would most likely benefit from the device’s other automatic features. The automatic uploads remove memory management concerns, and the automatic organization simplifies the editing process, as shown in the secondary use case scenario. Because this project is meant to be accessible to instructors and makers working at home due to COVID-19, it is built entirely from parts that are easy to order online or can be adapted to the materials at hand. All plans, code, and documentation necessary to recreate the device will be open source so that people can build it at home, and so that the maker community can improve on the design. In that same spirit, the device is designed to accept additional inputs, so that it can record and edit based on devices similar to those proposed by Patricia Yu, Miranda Luong, and Elizabeth Han.
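The speed-up edit described above can be sketched as a playback schedule: important windows run at 1x and everything else at a fixed fast factor. The function names and the 8x default are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the auto-generated timelapse edit.

def speed_schedule(important, duration, fast=8.0):
    """Cover the whole recording with (start, end, playback_speed)
    segments: `important` windows play at 1x, the rest at `fast`."""
    schedule, cursor = [], 0.0
    for start, end in important:
        if cursor < start:
            schedule.append((cursor, start, fast))  # less important: sped up
        schedule.append((start, end, 1.0))          # marked moment: normal speed
        cursor = end
    if cursor < duration:
        schedule.append((cursor, duration, fast))
    return schedule

def output_length(schedule):
    """Length of the condensed video, in seconds."""
    return sum((end - start) / speed for start, end, speed in schedule)

# A 5-minute session with two marked windows condenses to ~86 seconds:
sched = speed_schedule([(20.0, 55.0), (190.0, 210.0)], duration=300.0)
print(round(output_length(sched), 1))
# -> 85.6
```

A video tool could then render each segment at its assigned speed and concatenate them, producing the single short video without any manual editing.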
Natalia Zeller MacLean is a summer intern at the HCII and a rising senior (Class of 2021) in Mechanical Engineering at Cornell University. She serves as a mechanical and systems engineering team lead on Cornell Cup Robotics, developing both modular educational robotic kits and a larger autonomous robot that explore human interaction with independent mobile robotics. In her free time, she fences épée on the varsity team, reads classical texts in Latin, and tinkers with any broken mechanisms or electronics she can find. As a summer 2020 REU fellow, Natalia is prototyping a smart documentation automated timelapse tool to support at-home documentation practices.