Tanay Pant has done some great work making a virtual assistant that is open for others to learn from and use. Melissa has been improving, recently gaining its own UI, which brings it out of the command line! I spoke with Tanay about the UI process.
Melissa might sound familiar to some readers: we spoke to Tanay earlier this year about his virtual assistant, and Tanay himself was nominated as our top Dev Diner Emerging Tech Inspiration for the Internet of Things and Artificial Intelligence in 2016! Building a UI for a voice system is a big undertaking, and people approach it in different ways. I’ve built UIs for two of my own virtual assistants… and they’re both different from each other! It is still early days, so it is incredibly valuable to hear from others who’ve been there and see what lessons they learnt, so you can approach it from a more informed perspective!
Tanay’s approach was to get some talented help — he worked with Nakul Saxena, a user experience designer, to find the right UI for his assistant. As Tanay explains,
“As we progressed from the early day Melissa to the present day Melissa, it was evident that we are no longer focused on providing yet another virtual assistant that runs on the command line. We wanted our users who run Melissa in a graphical environment to have a unique experience that makes them feel different. Something different from the everyday mundane life. We adopted the UI interface that was built by Nakul Saxena and integrated it with Melissa. From here on, we have adapted the Google Chrome STT [Speech to Text] and this helped us in getting rid of various modules which made installation of Melissa a bit tricky. Right now we are also working on making Melissa’s installation even easier by distributing it as a package on PyPI so that it can be installed with just a simple pip command.”
Tanay’s biggest challenge was finding the right speech-to-text service to use.
“The inclusion of the right Speech to Text (STT) engine is the biggest challenge that usually comes up when giving a virtual assistant a UI. When we are adding integrations that are essential to the basic runtime of the software, we usually avoid services that require the user to acquire authentication credentials. That limits the choice of STTs that we can utilise, especially in mobile platforms.” — Tanay Pant
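Melissa’s actual source isn’t shown here, but the constraint Tanay describes — preferring STT services that don’t force the user to acquire authentication credentials — can be sketched as a tiny selection layer behind a common interface. Everything below (the engine names, the `requires_credentials` flag, the `pick_default_engine` helper) is hypothetical illustration, not Melissa’s real architecture:

```python
# Hedged sketch: isolate the STT engine behind a small interface so engines
# can be swapped, and prefer engines that work without credential setup.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class STTEngine:
    name: str
    requires_credentials: bool          # does the user need an API key?
    transcribe: Callable[[bytes], str]  # audio bytes -> recognised text


def pick_default_engine(engines: List[STTEngine]) -> STTEngine:
    """Prefer an engine that runs out of the box, with no credential setup."""
    for engine in engines:
        if not engine.requires_credentials:
            return engine
    raise RuntimeError("No credential-free STT engine available")


# Hypothetical catalogue for illustration only.
engines = [
    STTEngine("keyed-cloud-stt", requires_credentials=True,
              transcribe=lambda audio: "..."),
    STTEngine("chrome-stt", requires_credentials=False,
              transcribe=lambda audio: "..."),
]

print(pick_default_engine(engines).name)  # chrome-stt
```

Keeping the engine choice behind one function like this is what makes it cheap to swap STT providers later, which matters on platforms (such as mobile) where the credential-free options are limited.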
The most common inspiration I see among teams working with emerging tech is sci-fi movies! Tanay’s approach was inspired by Iron Man. My own approach in one of my assistants was inspired by the UI in DC Comics TV shows like The Flash and Arrow. What would we do without these crucial bits of popular culture to guide us?
“The user interface should make the user feel special. Apart from the functionalities, the UI should be developed in such a way that reminds the users of all the awesome action that they see in the sci-fi movies and actually makes them feel like Tony Stark. Drawing from that inspiration and train of thought, we adapted the idea of the user interface from Iron Man’s Jarvis, which in turn I believe was partly inspired from the console of the F-16 fighter jets”, Tanay says, highlighting something that you don’t actually hear quite as often from developers — making the user feel special. Or even something as simple as making them smile. That’s a lovely goal.
“I know that it might seem cliched but it’s really worth it when someone installs Melissa for the first time, sees the animated rings rotate about, and I get to see the smile on their face.” — Tanay Pant
There is so much still to explore in the area of virtual assistants and UI. I’d say even the interfaces created by the major players don’t have it completely right just yet. It’s a tough problem to solve. Even Tanay is keen to keep finding ways to improve Melissa’s UI, and he knows what they’re aiming for:
“I have always felt that good design should focus on intuition and simplicity. You might argue that Melissa’s UI does not strictly adhere to these elementary principles. Well, the UI is still a work in progress and we have many high-minded ambitions for this project. This project is going to get more and more exciting with each passing day.”
Tanay was even nice enough to share an exclusive piece of info with us (thanks, Tanay!):
“Melissa was always destined to have a UI. Only after Melissa gained considerable momentum, the realisation to add UI dawned upon us. I’ll tell you about something interesting that we are planning to do and Dev Diner is the first to know about this. Since both Melissa and WebVR are my true love, I am planning to work on the intersection of both the technologies. Imagine how cool it would be if apart from a web UI, you could also put on a VR headset and experience interaction with Melissa in a virtual environment. STT choices, revamp of the architectural foundation and specifically focussed new design principles for Melissa’s VR UI are some of the challenges that we would face while trying to make this dream a reality. However, it is something that would definitely be rolled out in the near future.”
Exciting times! I can’t wait to see where Melissa goes from here!
Thank you to Tanay for taking the time to share some more insight into his Melissa project! Melissa is an open source project on GitHub, and the team is looking for contributors to help develop it and suggest improvements, so say hi, file issues on their GitHub and get involved!