Understanding the needs and goals.

Requirements

About our Project

Project Background
Kiosks are found everywhere and provide a wide range of services. However, individuals with motor disabilities are often unable to use them, and physical contact with shared touchscreens poses a hygiene risk. To solve this, our team set out to develop a touchless kiosk system controlled by hand gestures using UCL MotionInput. This approach is more hygienic and user-friendly, and it has the potential to make a significant impact on the self-service industry.
Client Introduction
Our Primary Stakeholders:
-> Prof. Dean Mohamedally (UCL)
-> Prof. Graham Roberts (UCL)

Supported By:
-> Tom Winstanley (NTT DATA)
-> Phillippa Chick (Intel)
-> Costas Stylianou (Intel)

Project Goals
Initially, our project goals were unclear. However, after several discussions with our client and various iterations, we established the following main goals:
  • Touchless Kiosk Interface: This interface allows users to control and operate self-service kiosks. It includes features like crowd detection, which tracks only the active user and ignores the crowd. Additionally, it is customizable with settings like gesture sensitivity, preferred hand (left or right), and more.
  • Web-Extension: This feature allows users to use the touchless kiosk interface in a browser. It includes an animated preview of the user's hand movements and is resilient, meaning it automatically restarts the MotionInput backend if it crashes.
  • App Launcher: This feature enables users to modify settings and open or close the app.
By incorporating these goals into our project, we were able to create a touchless kiosk system that met the needs of our clients and users.

Requirement Gathering

Communication with Client
Regular calls were scheduled with our client to discuss the progress and requirements of the project.
Semi-Structured Interview
Store owners were interviewed to understand the features and user interfaces of the existing kiosks. Conducting semi-structured interviews allowed us to ask more open-ended questions, gather in-depth information, and better understand the operations that our touchless interface needed to provide.
Sample response from a restaurant:
How do you use kiosk software?
We extensively use kiosk software to expedite the food ordering process for our customers. It has significantly helped us reduce waiting times.

What sort of functionality should kiosk software have?
Primarily, it should be designed for touch input, i.e., have large, clickable elements. To that end, we strictly follow a grid-based layout for simplicity and ease-of-access to elements. It should also allow for linking between elements and be able to perform actions such as adding items to the cart, applying vouchers, etc.

What do you mean when you refer to the term "elements"?
Elements include things like menu items, text boxes, image placeholders, navigation bars, etc. I would expect kiosk software to be able to add different types of elements to fulfil all our use cases and connect these elements together for an interactive experience.

What difficulties do you find with current kiosk software development tools?
Apart from the cost, a major difficulty is predicting what the final product will look like. I would love to be able to specify the exact dimensions and see an approximate preview of how the UI would look on actual kiosk hardware. It would also be nice if this preview could simulate interactions between the elements.

Do you think you are able to cater to all your users with the current kiosk software tools?
Definitely not. Kiosk software is severely lacking when it comes to catering to disabilities. Things like text-to-speech and color-blind options do exist and are nice to have. However, many more features could be implemented, such as zoom and text scaling for people with vision impairments.

We are currently working on software that can replace conventional touch input with a wide range of gestures (facial, hand, eye, body, etc.). Would you be interested in such technology?
Absolutely, I think that sounds like a fantastic idea. If you could create a kiosk development tool that would be able to integrate these gestures, I think we would be able to better fulfil the needs of our customers with disabilities. It would be even better if the additional hardware required was minimal and not too expensive to add.

Personas

After conducting the interviews, we divided our customers into two broad categories and created a persona for each to better understand their goals and requirements.
Persona 1
Providers who want to sell their products and offer self-service kiosks.
Persona 2
End users who will use the kiosk software.

MoSCoW Requirements

Touchless Kiosk Controls

ID | Requirement | Priority
F1.1 | The system must allow controlling web-based kiosk interfaces using hand gestures such as swipes, finger pinches, etc. | MUST
F1.2 | The gesture controls must be performant, sustaining at least 20 fps on a mid-range kiosk computer (8th-gen Intel i5, 4 GB RAM). | MUST
F1.3 | The system must track only the active user's hand and ignore disturbances from the surrounding environment. | MUST
F1.4 | The system should allow adjusting parameters such as gesture sensitivity, detection thresholds, and whether the right or left hand is used. | SHOULD
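A minimal sketch of one way F1.3 could be met (an illustrative heuristic, not necessarily how MotionInput implements crowd rejection): treat the largest detected hand, i.e. the one closest to the camera, as the active user and discard the rest.

```python
def pick_active_hand(hands):
    """Given candidate detections as (hand_id, bounding_box_area) pairs,
    keep only the hand closest to the camera (largest area) and ignore
    the rest of the crowd. Returns None when no hands are detected."""
    if not hands:
        return None
    return max(hands, key=lambda h: h[1])[0]

# The active user's hand (area 0.12) dwarfs two background hands.
active = pick_active_hand([("user", 0.12), ("passerby", 0.02), ("child", 0.01)])
```

In practice, the detection IDs and areas would come from the hand-tracking model each frame; the point is simply that per-frame filtering keeps the interface responsive only to the person standing at the kiosk.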

End-User Frontend (Web Extension)

ID | Requirement | Priority
F2.1 | The extension must be able to hook into any web-based interface without modifying its source code. | MUST
F2.2 | The extension must display a live preview of the user's hands so that they can control interfaces via gestures more reliably. | MUST
F2.3 | To respect the user's privacy and avoid embarrassment, the live preview should display an abstracted version of the user's hands. | SHOULD
F2.4 | The extension should allow changing gesture sensitivity in real time based on user preference. | SHOULD
F2.5 | The extension could have a status box indicating whether the backend software is active. | COULD
F2.6 | To improve resiliency, the extension should be able to relaunch the backend if it stops functioning. | SHOULD
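The relaunch behaviour of F2.6 (and F3.5) can be sketched as a small watchdog loop. This is a minimal sketch: the command, restart limit, and polling interval below are placeholders, not the project's actual values.

```python
import subprocess
import sys
import time

def run_with_relaunch(cmd, max_restarts=3, poll_interval=0.1):
    """Keep a backend process alive: whenever it exits, start it again,
    up to max_restarts times (the relaunch behaviour of F2.6/F3.5)."""
    restarts = 0
    proc = subprocess.Popen(cmd)
    while restarts < max_restarts:
        if proc.poll() is not None:        # backend has exited
            restarts += 1
            proc = subprocess.Popen(cmd)   # relaunch it
        time.sleep(poll_interval)
    proc.terminate()                       # stop the final instance
    proc.wait()
    return restarts

# A command that exits immediately stands in for a crashing backend.
restarts = run_with_relaunch([sys.executable, "-c", "pass"], max_restarts=2)
```

A real extension cannot spawn processes directly, so this logic would live on the backend side (for example, behind the browser's native-messaging bridge) and be triggered by the extension.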

Communication Protocol

ID | Requirement | Priority
F3.1 | The communication protocol must support duplex communication between the frontend and the backend. | MUST
F3.2 | The communication protocol must support manipulating config settings, allowing parameters such as sensitivity to be tweaked in real time. | MUST
F3.3 | To maintain security, the communication protocol must not use any remote endpoints and must work without network access. | MUST
F3.4 | The communication protocol should provide an API for external use and use a consistent JSON schema. | SHOULD
F3.5 | To improve resiliency, the communication protocol should keep working in the event of MotionInput crashing and should be able to relaunch it. | SHOULD
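Under F3.2 and F3.4, a real-time config tweak might travel over the duplex channel as a small JSON message. The sketch below is illustrative only: the field names ("type", "setting", "value") are assumptions, not the project's actual schema.

```python
import json

def make_config_update(setting, value):
    """Frontend side: build a message that tweaks one config setting (F3.2).
    The field names here are illustrative, not the project's real schema."""
    return json.dumps({"type": "config_update", "setting": setting, "value": value})

def apply_message(config, raw):
    """Backend side: parse an incoming message and update the in-memory config."""
    msg = json.loads(raw)
    if msg.get("type") == "config_update":
        config[msg["setting"]] = msg["value"]
    return config

config = {"sensitivity": 0.5, "hand": "right"}
apply_message(config, make_config_update("sensitivity", 0.8))
```

Keeping every message to one consistent shape (a "type" discriminator plus a payload) is what makes the API usable by external extensions, as F3.4 asks.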

Source Compilation

ID | Requirement | Priority
F4.1 | There must be an easy-to-use compilation script that compiles the backend into an executable requiring no additional dependencies. | MUST
F4.2 | The executable must target Windows 10 and later. | MUST
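One common way to satisfy F4.1 is PyInstaller's one-file mode. Whether the project's compilation script actually uses PyInstaller is an assumption here, and the entry-point file name is hypothetical.

```python
def build_command(entry_script, exe_name):
    """Assemble a PyInstaller invocation that bundles the backend into a
    single self-contained executable (F4.1).
    --onefile: one .exe with all dependencies; --noconsole: no console window."""
    return ["pyinstaller", "--onefile", "--noconsole", "--name", exe_name, entry_script]

# "motioninput_backend.py" is a hypothetical entry point, not the real file name.
cmd = build_command("motioninput_backend.py", "TouchlessKiosk")
```

Running the resulting command on a Windows 10 machine produces a Windows executable, which also covers F4.2, since PyInstaller builds for the OS it runs on.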

Touchless Kiosk Launcher

ID | Requirement | Priority
F5.1 | The launcher must not display the MotionInput live preview screen or open the Windows console. | MUST
F5.2 | The launcher should have a single launch button that starts the server and MotionInput. | SHOULD
F5.3 | The launcher must allow the administrator to set the desired sensitivity of the swipe gesture. | MUST
F5.4 | The launcher must allow the administrator to set the desired hand (right or left). | MUST
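The administrator settings in F5.3 and F5.4 imply some validation before launch. A minimal sketch, assuming illustrative key names and ranges rather than MotionInput's real config format:

```python
def validate_settings(settings):
    """Validate launcher settings before writing them to the config file.
    The keys and ranges are illustrative, not MotionInput's real format."""
    hand = settings.get("hand", "right")
    if hand not in ("left", "right"):                 # F5.4: right or left only
        raise ValueError(f"hand must be 'left' or 'right', got {hand!r}")
    # F5.3: clamp swipe sensitivity into an assumed 0.0-1.0 range.
    sensitivity = min(1.0, max(0.0, float(settings.get("sensitivity", 0.5))))
    return {"hand": hand, "sensitivity": sensitivity}

validated = validate_settings({"hand": "left", "sensitivity": 1.7})
```

Clamping out-of-range values rather than rejecting them keeps the launcher forgiving for non-technical administrators.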

Non-Functional Requirements

ID | Requirement | Priority
NF1 | The code must be open-source and easily understandable so that future developers can access the API and backend to create their own extensions. | MUST
NF2 | The code should be well maintained, with bugs regularly addressed and updates made available for future compatibility. | SHOULD
NF3 | The system's structure should be straightforward enough for individuals without extensive technical knowledge to deploy it easily. | SHOULD
NF4 | Documentation could be written to make the system more convenient for users and developers, both for utilizing the system and for further development. | COULD

Use Cases

ID | Description | Action | Result
UCC1 | Navigate to a different option | Swipe in the corresponding direction | System switches to the option corresponding to the swiped direction
UCC2 | Proceed with the current option | Pinch index finger and thumb | System selects the current option
UCC3 | Adjust sensitivity during use | Drag the sensitivity slider in the web extension UI | Sensitivity adjusted without restarting MotionInput
UCC4 | Relaunch MotionInput | Click the 'Relaunch' button in the web extension UI | MotionInput relaunched

ID | Description | Action | Result
UCS1 | Adjust sensitivity | Drag the swipe sensitivity slider in the settings menu | Swipe sensitivity applied on next launch
UCS2 | Adjust the maximum number of hands detected | Drag the Max Hands Detected slider in the settings menu | The specified number of hands will be detected on next launch
UCS3 | Switch controlling hand | Choose the desired option in the settings menu | The selected (left/right) hand is detected on next launch
UCS4 | Switch default camera | Click the Default Camera box in the settings menu and select the desired option from the drop-down menu | The chosen camera will be used on next launch
UCS5 | Launch MotionInput | Click the Launch button in the settings menu | MotionInput launched
UCS6 | Exit from settings | Click the Cancel button in the settings menu | Settings menu closed
UCS7 | Shut down MotionInput | Click the Exit MotionInput button in the settings menu | MotionInput terminated along with the intermediary server