Jovo Language Model

In this section, you will learn more about the Jovo Language Model, found in the /models folder of your project. It can be used to create platform-specific language models with the Jovo CLI.


The Jovo Language Model allows you to maintain just a single file per language, which the Jovo CLI uses to create the platform-specific language models.

You can find the language model files in the /models folder of your Jovo project:

Models Folder in a Jovo Project

For the platform-specific NLU (natural language understanding), we currently support the built-in alexa for Alexa Skills and dialogflow for Google Actions. These are referenced in the app.json file after the platforms are initialized with jovo init <platform>. To learn more about what the resulting platform models look like, please read App Configuration > Models > Platforms.

Every language you choose to support has its own language model file (en-US, de-DE, etc.). Overall, the Jovo Language Model is similar to the Alexa interaction model, with some small differences here and there.

For example, the en-US.json in the Jovo Sample Voice App looks like this:
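A condensed sketch of such a model (abridged; the invocation name and intents are illustrative) follows the schema covered in the rest of this section:

```json
{
    "invocation": "my test app",
    "intents": [
        {
            "name": "HelloWorldIntent",
            "phrases": [
                "hello",
                "say hello",
                "say hello world"
            ]
        },
        {
            "name": "MyNameIsIntent",
            "phrases": [
                "{name}",
                "my name is {name}",
                "i am {name}"
            ],
            "inputs": [
                {
                    "name": "name",
                    "type": {
                        "alexa": "AMAZON.US_FIRST_NAME",
                        "dialogflow": "@sys.given-name"
                    }
                }
            ]
        }
    ],
    "alexa": {
        "interactionModel": {
            "languageModel": {
                "intents": [
                    { "name": "AMAZON.CancelIntent", "samples": [] },
                    { "name": "AMAZON.HelpIntent", "samples": [] },
                    { "name": "AMAZON.StopIntent", "samples": [] }
                ]
            }
        }
    }
}
```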

Let's go through the specific elements in detail.

Language Model Elements

The Jovo Language Model consists of several elements, which we will go through step by step in this section:

For platform specific language model elements, take a look at the sections below:


Invocation

The invocation is the first element of the Jovo Language Model. It sets the invocation name of your voice application (the name people use to talk to your voice app; see Getting Started > Voice App Basics for more information).
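For example (the invocation name is illustrative):

```json
{
    "invocation": "my test app"
}
```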

Please note: The invocation element is currently only exported to Alexa Skills. Invocation names for Google Actions have to be set in the Actions on Google developer console.


Intents

Intents can be added to the JSON as objects that include a name, sample phrases, and inputs (optional):
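As a sketch, an intent with a name input could look like this (the built-in types shown are Amazon's AMAZON.US_FIRST_NAME and Dialogflow's @sys.given-name):

```json
{
    "name": "MyNameIsIntent",
    "phrases": [
        "{name}",
        "my name is {name}",
        "i am {name}"
    ],
    "inputs": [
        {
            "name": "name",
            "type": {
                "alexa": "AMAZON.US_FIRST_NAME",
                "dialogflow": "@sys.given-name"
            }
        }
    ]
}
```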

Intent Name

The name specifies how the intent is called on the platforms. We recommend using a consistent standard. In our examples, we append Intent to each name, like MyNameIsIntent.


Phrases

This is an array of example sentences, or phrases, which will be used to train the language model on Alexa and Dialogflow. It is equivalent to utterances or "user says" on the respective developer platforms.


Inputs

While defining your inputs (slots on Alexa and entities in Dialogflow), you can either provide separate input types for each platform or define your own input type:
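A sketch of the two approaches (the name input uses platform built-in types, while city references a custom input type):

```json
"inputs": [
    {
        "name": "name",
        "type": {
            "alexa": "AMAZON.US_FIRST_NAME",
            "dialogflow": "@sys.given-name"
        }
    },
    {
        "name": "city",
        "type": "myCityInputType"
    }
]
```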

In the upper part of the example above, for the name input, we distinguish between input types for alexa and dialogflow. Learn more about their built-in input types here:

In the lower part, we reference a new input type called myCityInputType, which we need to define outside the intents array of the overall model.

You can also manage an input as a list by specifying the isList parameter for the dialogflow platform. It is not necessary to add this parameter if your input is not a list.
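As a sketch, assuming the parameter lives in a dialogflow object on the input (the exact placement may vary between Jovo versions):

```json
{
    "name": "city",
    "type": "myCityInputType",
    "dialogflow": {
        "isList": true
    }
}
```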

Input Types

The inputTypes array is the place where you can define your own input types and provide a name, values, and synonyms (optional).
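For example, the custom myCityInputType could be defined like this (the city values are illustrative):

```json
"inputTypes": [
    {
        "name": "myCityInputType",
        "values": [
            {
                "value": "Berlin"
            },
            {
                "value": "New York",
                "synonyms": [
                    "New York City"
                ]
            }
        ]
    }
]
```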

Input Type Name

The name specifies how the input type is referenced. Again, we recommend using a consistent style throughout all input types to keep things organized.


Values

This is an array of elements that each contain a value and, optionally, synonyms. With the values, you define which inputs you're expecting from the user.
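For example (illustrative values):

```json
"values": [
    {
        "value": "Berlin"
    },
    {
        "value": "New York",
        "synonyms": [
            "New York City"
        ]
    }
]
```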


Synonyms

Sometimes different words have the same meaning. In the example above, we have a main value New York and a synonym New York City.

To learn more about how these input values and synonyms can be accessed, take a look at App Logic > Data.

Allow automated expansion

On Dialogflow, you can add the automatedExpansion parameter to allow automated expansion, like this:
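A sketch, assuming the parameter lives in a dialogflow object on the input type (verify the exact placement against your Jovo version):

```json
{
    "name": "myCityInputType",
    "dialogflow": {
        "automatedExpansion": true
    },
    "values": [
        {
            "value": "Berlin"
        }
    ]
}
```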

Platform Specific Elements

If you only want to use certain features for one of the platforms, you can also add objects for their natural language understanding tools (nlu) to the model.

For Alexa Skills, Jovo currently supports the built-in NLU (natural language understanding) alexa, while for Google Assistant, dialogflow is supported.


Alexa

Some of the features Alexa provides have to be implemented separately in the alexa nlu section.

Here are some examples:

  • Built-in intents and slots (the ones with AMAZON. prepended to their names)
  • Other Alexa-specific features like the Dialog Interface

This is what it looks like:
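As a sketch, with some typical Amazon built-in intents (the alexa object uses Alexa's own interactionModel syntax):

```json
"alexa": {
    "interactionModel": {
        "languageModel": {
            "intents": [
                {
                    "name": "AMAZON.CancelIntent",
                    "samples": []
                },
                {
                    "name": "AMAZON.HelpIntent",
                    "samples": []
                },
                {
                    "name": "AMAZON.StopIntent",
                    "samples": []
                }
            ]
        }
    }
}
```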

If you don't have this object in your language model, the jovo init command will automatically append it with all the built-in intents required by Amazon.

The alexa object contains the interactionModel in its original syntax. For example, you can go to the Code Editor in the Skill Builder (beta) and copy-paste the parts you need into this section of the Jovo Language Model file.


Dialogflow

There are two ways you can add Dialogflow-specific elements:

  • Add options to Jovo Language Model intents
  • Add intents and entities to the dialogflow element

You can add options to Jovo intents like this:
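A sketch, setting an intent's priority in a dialogflow object on the intent (the priority values are listed below):

```json
{
    "name": "HelloWorldIntent",
    "phrases": [
        "hello"
    ],
    "dialogflow": {
        "priority": 500000
    }
}
```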

The example above shows how you can add Dialogflow-specific elements, like a priority, to an intent.

The priority can have the following values:

Definition | Value   | Color
---------- | ------- | ------
Highest    | 1000000 | Red
High       | 750000  | Orange
Normal     | 500000  | Blue
Low        | 250000  | Green
Ignore     | 0       | Grey

Similar to the alexa element, you can also add Dialogflow-specific intents and entities to the language model.
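As a sketch, based on Dialogflow's exported agent syntax (the default intents shown are typical examples):

```json
"dialogflow": {
    "intents": [
        {
            "name": "Default Fallback Intent",
            "auto": true,
            "webhookUsed": true,
            "fallbackIntent": true
        },
        {
            "name": "Default Welcome Intent",
            "auto": true,
            "webhookUsed": true,
            "events": [
                {
                    "name": "WELCOME"
                }
            ]
        }
    ]
}
```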

If you don't have this object in your language model, the jovo init command will automatically append it with all the intents required by Dialogflow.

The dialogflow object contains the agent data in its original syntax. For example, you can export your Dialogflow agent, look at the files, and copy-paste the parts you need into this section of the Jovo Language Model file.

