During the first two weeks, our group focused on specifying requirements through meetings with our clients. Alongside this communication, we created initial sketches and wireframes for our HCI design.
In weeks 3 and 4, our group primarily focused on the HCI design. At that time, our goal was to create a 'touchless kiosk interface maker,' and our entire HCI design was based on this objective. Although our later work took a completely different direction, the experience during this period was valuable, and our study of related projects contributed significantly to our UI design.
During weeks 5 and 6, our team's development direction shifted. Through extended communication with our clients, we confirmed that we should focus on developing a single, unified plugin that supports different existing kiosk UIs and overrides the original touch control with gesture control. This change required us to redirect our technical research and set aside work that was no longer relevant.
In week 7, we began working on our literature review. Since we were focused on creating an extension for existing kiosk UIs, our research at this point primarily revolved around existing kiosk machines, such as those found in McDonald's or TESCO. We also began working on the implementation, mainly familiarizing ourselves with the MotionInput API and its main functions. Since a proper UI design was still unavailable, deeper research on MotionInput was difficult to conduct. By the end of week 8, we had proposed a set of draft UI designs.
During weeks 9 and 10, we concluded our initial research and focused on making our project more accessible, which influenced our choice of UI design. We were unsure whether to include head gesture control, but after discussing it with our clients, we decided to get hand-gesture control working first and revisit head gesture control later. We also studied the MotionInput API and selected the gestures best suited to our UI design. We had discussed the idea of crowd detection before, but in week 10 it became a feature we wanted to focus on, and we made initial progress towards implementing it, mainly by settling on the method we would use. We also worked on building our website during this time.
During the following weeks, we began implementing the backend logic for hand gesture control. We had difficulty with navigating through options and decided to use a swipe gesture. We adopted the mr_swipe event from the original MotionInput codebase but discovered during testing that it was glitchy: a single natural swipe would sometimes trigger multiple responses. We investigated possible causes, such as frame loss or platform issues (we were running on a Windows VM on macOS). We also continued work on crowd detection, although it was not our primary focus. In week 15, we presented our progress in an elevator pitch. We attempted to work on our group portfolio but realized that much of the necessary content would be missing until the project was complete, so we decided to finish the portfolio once the project was nearly done.
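Repeated triggers from a single physical gesture are typically suppressed with a short cooldown between accepted events. As a purely illustrative sketch (the class name, the 0.5 s threshold, and the timings below are hypothetical assumptions, not MotionInput code), such a guard can look like this:

```python
import time

# Hypothetical cooldown guard against duplicated swipe events.
# Illustration only, not MotionInput code; the class name and the
# 0.5 s threshold are assumptions made for this example.
class SwipeDebouncer:
    def __init__(self, cooldown_s=0.5):
        self.cooldown_s = cooldown_s        # minimum gap between accepted swipes
        self.last_accepted = float("-inf")  # timestamp of the last accepted swipe

    def accept(self, event_time=None):
        """Return True if a raw swipe should fire, False if it falls
        inside the cooldown window and should be dropped."""
        now = time.monotonic() if event_time is None else event_time
        if now - self.last_accepted >= self.cooldown_s:
            self.last_accepted = now
            return True
        return False

# One physical swipe reported three times, followed by a genuine new swipe:
debouncer = SwipeDebouncer(cooldown_s=0.5)
for t in [0.00, 0.08, 0.15, 0.90]:
    if debouncer.accept(t):
        print(f"swipe fired at t={t}")  # fires at 0.00 and 0.90 only
```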
During weeks 16 and 17, we shifted the course of our development once more, based on further communication with our clients. We were advised to focus on refining our existing gesture modes and to consider the maintenance of the kiosk alongside its users. We finally traced the glitch described above to the mr_swipe mode itself and overhauled its logic to build our own swipe gesture. We integrated the clicking gesture into this event and changed the tracking point to make the calculations simpler and the sensitivity easier to adjust for future extensions. We also decided to implement a user feedback function that improves the user experience by displaying the user's tracked hand on screen. Finally, we successfully implemented crowd detection with MediaPipe and began developing the web extension UI.
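As a rough illustration of this kind of pipeline (the confidence threshold and the single-user gating policy below are assumptions for the example, not our final settings), crowd size can be approximated by counting faces with MediaPipe's face detector:

```python
import cv2
import mediapipe as mp

# Rough sketch: approximate crowd size by counting detected faces.
# The confidence threshold and the single-user gating policy are
# illustrative assumptions, not the project's final settings.
mp_face = mp.solutions.face_detection

cap = cv2.VideoCapture(0)  # default webcam
with mp_face.FaceDetection(model_selection=1,  # full-range model
                           min_detection_confidence=0.5) as detector:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV captures BGR.
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        count = len(results.detections) if results.detections else 0
        # Example policy: enable gesture control only when exactly one
        # person is standing in front of the kiosk.
        color = (0, 255, 0) if count == 1 else (0, 0, 255)
        cv2.putText(frame, f"people: {count}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
        cv2.imshow("crowd detection", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```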
In mid-February, we had a fully functional system and began optimizing its various features. The web extension's user feedback was fully functional, and tests conducted within the team showed that it worked well. We concentrated on small adjustments and debugging, fixing issues such as unexpected crashes and making minor changes to the backend logic to smooth the gesture control. In week 19, the client requested a launch menu, and the team began building it.
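One common way to smooth jittery tracking input of this kind is an exponential moving average over the tracked point. The brief sketch below is a generic illustration under that assumption (the class name and alpha value are hypothetical), not our exact backend code:

```python
# Generic exponential-moving-average smoother for a tracked point.
# The name and alpha value are hypothetical; this shows a standard
# jitter-reduction technique, not our exact backend code.
class PointSmoother:
    def __init__(self, alpha=0.4):
        self.alpha = alpha  # 0..1: lower = smoother but laggier motion
        self.state = None   # last smoothed (x, y)

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)
        else:
            sx, sy = self.state
            self.state = (sx + self.alpha * (x - sx),
                          sy + self.alpha * (y - sy))
        return self.state

# Usage: feed raw fingertip coordinates frame by frame.
smoother = PointSmoother(alpha=0.4)
for raw in [(100, 100), (104, 98), (99, 103)]:
    print(smoother.update(*raw))
```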
In week 20, we implemented the launch menu, with adjustable settings, using a visual C++ GUI editing tool. The team felt it was time to resume work on our group portfolio, so while refining the frontend web plugin, we mainly organized our records and earlier research into content for the portfolio. By week 21, we had confirmed that implementing another gesture method was not feasible, so we removed several options from the launch menu, including the face/hand gesture switch, and reshaped its GUI.
During week 22, we conducted a live demo to test the capabilities of our system, which surfaced issues we had not noticed before: although the system worked well, the lack of proper user guidance caused some confusion on first use. Following the client's requirements, we built a new sample website suited to our demonstration. Beyond that, most of our effort went into packaging our code, creating a website for our group portfolio, and producing an accompanying demo video.