Version 1.2 of the Jovo Framework (and 1.1 of the Jovo CLI) is now available. Our goal is to support professional developers and teams of any size in building great voice and multimodal apps, and this release is a big step in that direction. Read on to learn more about what’s new.
⭐ Do you like Jovo? Show your support by giving us a star on GitHub ⭐
Let’s make some money! Amazon announced In-Skill Purchases (ISPs) for Alexa in May, giving developers the ability to monetize their Alexa Skills by selling digital items like additional content. We believe this could become an inflection point for voice user interfaces, as more professionals now have an incentive to invest time and money into building engaging experiences with the potential to be monetized. This is why we worked hard to bring ISP support to Jovo as soon as possible.
The easiest way to get started is to download our template and follow the steps in its README:
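To give you a feel for the flow, here is a rough sketch of what ISP handling can look like in a Jovo handler. The intent name and the commented-out purchase helper call are assumptions for illustration, not the template’s actual code — the template’s README is the authoritative starting point:

```javascript
// Hypothetical sketch of an in-skill purchase flow. The intent name and the
// commented-out ISP helper call are assumptions, not the template's code.
const ispHandlers = {
    BuyGoldPackIntent() {
        // Kick off the Alexa purchase dialog for a product here,
        // e.g. via the framework's Alexa in-skill purchase helper:
        // this.alexaSkill().inSkillPurchase().buy(productId);
    },
    ON_PURCHASE() {
        // Alexa routes the user back here after the purchase dialog finishes;
        // inspect the result and respond accordingly.
        this.tell('Thanks for your purchase!');
    },
};
```

The key idea: the purchase dialog itself happens outside your skill, and Alexa hands the user back to a dedicated handler afterwards.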
Advanced voice apps and teams call for more sophisticated deployment processes than just copy/paste. With this newest addition to the framework, you can now add staging definitions to your app.json file and even overwrite configurations, e.g. for database integrations, depending on the stage.
Here is an example:
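Roughly, a staged app.json could look like this. The stage names and database values below are made up, and the exact keys are best checked against the staging docs linked below:

```json
{
    "defaultStage": "dev",
    "stages": {
        "dev": {
            "db": {
                "type": "file",
                "localDbFilename": "db"
            }
        },
        "prod": {
            "db": {
                "type": "dynamodb",
                "tableName": "my-voice-app"
            }
        }
    }
}
```

This way, you can develop locally against a file-based database and only switch to a hosted database in production, without touching your app code.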
Thanks a lot Felix Gnass for the ENV suggestion! 👏
Read the full docs here: Advanced/Staging.
Digging deep into the core code of the Jovo Framework to extend it with custom functionality can be quite daunting. This is why we worked on an easier way for you to write your own extensions for the framework. Meet: Jovo Plugins.
Here is an example of some custom logging features:
Read the full docs here: Advanced/Plugins.
When we asked experts about the difference between good and great voice apps, the ability to understand context was mentioned almost every time. Keeping track of what the user previously said can be quite tedious, though. The user context object is here to help you with that and allows you to store contextual information in a database. By default, it stores the request and response of the previous interaction.
For example, this is how you can access the previous speech output:
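As a sketch, assuming the context object exposes previous interactions under `this.user().context.prev` (treat the exact property shape as an assumption and check the docs linked below):

```javascript
const handlers = {
    RepeatIntent() {
        // Speech output of the most recent previous interaction, read
        // from the user context object (property shape assumed from docs).
        const prevSpeech = this.user().context.prev[0].response.speech;
        this.tell(prevSpeech);
    },
};
```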
This can be used for various things, for example when the user didn’t catch the last response and asks you to repeat it. And with the user context object in place, the new repeat method makes this even easier:
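With that, a repeat intent can shrink to a one-liner — a sketch, assuming the repeat method re-delivers the previous output stored in the user context:

```javascript
const handlers = {
    RepeatIntent() {
        // Re-delivers the previous speech output stored in the user context.
        this.repeat();
    },
};
```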
Thanks a lot Brian Nichols for the suggestion and feedback! 👏
Read the full docs here: Data/User/Context.
Complex apps often require more than just one file of code. You can now add multiple handlers as objects. Here is an example:
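A sketch of what this can look like — the handler, state, and intent names below are made up, and in a real project each object would typically live in its own file:

```javascript
// Stateless handlers, e.g. kept in their own module.
const mainHandlers = {
    LAUNCH() {
        this.tell('Welcome to the pizza service!');
    },
};

// State-specific handlers, e.g. kept in a separate module.
const orderHandlers = {
    ORDER_STATE: {
        YesIntent() {
            this.tell('Great, your pizza is on its way.');
        },
    },
};

// setHandler now accepts multiple handler objects:
// app.setHandler(mainHandlers, orderHandlers);
```

This keeps large projects manageable: each feature or state can own its handler object, and they are merged when registered with the app.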
Thanks a lot Mark Tucker for the suggestion and feedback! 👏
To make sure your app doesn’t break with additional changes, unit tests are essential for any voice app project. With this release, we’re bringing our Jovo TestSuite into beta: an integrated solution to test both Alexa and Google Assistant requests.
We’re constantly testing this feature internally and with our community, and are working on documenting it thoroughly for you. Until then, take a look at this example repository on GitHub: milksnatcher/DefaultTests.
Thanks to everyone who helped with feature requests, feedback, and code contributions. It is great to see more people using the Jovo Framework to work on amazing projects. Feel free to join our Slack community and share what you’re working on!