Jovo Language Model

In this section, you will learn more about the Jovo Language Model, found in the /models folder of your project. It can be used to create platform-specific language models with the Jovo CLI.


Note: The Jovo Language Model is currently only supported for Alexa Skills and Google Actions!

The Jovo Language Model allows you to maintain a single model file per language, which the Jovo CLI uses to create the platform-specific language models.

You can find the language model files in the /models folder of your Jovo project:

[Image: Models folder in a Jovo project]

For the platform-specific NLU (natural language understanding), we currently support the built-in alexa for Alexa Skills and dialogflow for Google Actions. These are referenced in the project.js file. To learn more about what the resulting platform models look like, please read Models > Platforms.
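As a sketch, the nlu references in project.js may look like this (property names follow the Jovo project configuration format; adjust to your Jovo version):

```javascript
// project.js (sketch)
module.exports = {
    alexaSkill: {
        nlu: 'alexa', // use the built-in Alexa interaction model
    },
    googleAction: {
        nlu: 'dialogflow', // use Dialogflow as NLU for Google Actions
    },
};
```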

Every language you choose to support will have its very own language model (en-US, de-DE, etc.).

For example, the en-US.json in the Jovo Sample Voice App looks like this:
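The full sample file is not reproduced here, but a minimal sketch of its structure (intent and phrase names are illustrative) looks roughly like this:

```json
{
    "invocation": "my test app",
    "intents": [
        {
            "name": "HelloWorldIntent",
            "phrases": [
                "hello",
                "say hello",
                "say hello world"
            ]
        }
    ],
    "inputTypes": []
}
```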

Let's go through the specific elements in detail.

Language Model Elements

The Jovo Language Model consists of several elements, which we will go through step by step in this section:

For platform specific language model elements, take a look at the sections below:


The invocation is the first element of the Jovo Language Model. It sets the invocation name of your voice application (the name people use to invoke your voice app, see Voice App Basics for more information).

Note: The invocation element is currently only exported to Alexa Skills. Invocation names for Google Actions have to be set in the Actions on Google developer console.


Intents can be added to the JSON as objects that include:

This is what the MyNameIsIntent from the Jovo "Hello World" sample app looks like:
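A sketch of the intent, based on the structure described in this section (the exact phrases and type names may differ slightly from the current sample app):

```json
{
    "name": "MyNameIsIntent",
    "phrases": [
        "{name}",
        "my name is {name}",
        "i am {name}",
        "you can call me {name}"
    ],
    "inputs": [
        {
            "name": "name",
            "type": {
                "alexa": "AMAZON.US_FIRST_NAME",
                "dialogflow": "@sys.given-name"
            }
        }
    ]
}
```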

Intent Name

The name specifies how the intent is called on the platforms. We recommend using a consistent standard. In our examples, we add Intent to each name, like MyNameIsIntent.


This is an array of example sentences, or phrases, which will be used to train the language model on Alexa and Dialogflow. This is the equivalent of utterances or "user says" on the respective developer platforms.


While defining your inputs (slots on Alexa and entities in Dialogflow), you can choose to either provide separate input types for each platform or define your own input type:
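A sketch of both variants inside an intent (the type names are illustrative):

```json
"inputs": [
    {
        "name": "name",
        "type": {
            "alexa": "AMAZON.US_FIRST_NAME",
            "dialogflow": "@sys.given-name"
        }
    },
    {
        "name": "city",
        "type": "myCityInputType"
    }
]
```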

In the upper part of the example above, for the name input, we distinguish between input types for alexa and dialogflow. Learn more about their built-in input types here:

In the lower part, we reference a new input type called myCityInputType, which we need to define outside the intents array of the overall model.

Platform-specific Additions to Intents

You can also manage your input as a list by specifying the isList parameter for the dialogflow platform. It is not necessary to add this parameter if your input is not a list.
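One possible shape for this, assuming a custom input type as defined later in this section:

```json
"inputs": [
    {
        "name": "cities",
        "type": "myCityInputType",
        "dialogflow": {
            "isList": true
        }
    }
]
```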

Input Types

The inputTypes array is the place where you can define your own input types and provide:

Input Type Name

The name specifies how the input type is referenced. Again, we recommend using a consistent style throughout all input types to keep things organized.


This is an array of elements that each contain a value and optionally synonyms. With the values, you can define which inputs you're expecting from the user.


Sometimes different words have the same meaning. In the example above, we have a main value New York and a synonym New York City.
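A sketch of an inputTypes entry with a value and a synonym, following the structure described above:

```json
"inputTypes": [
    {
        "name": "myCityInputType",
        "values": [
            {
                "value": "New York",
                "synonyms": [
                    "New York City"
                ]
            },
            {
                "value": "Berlin"
            }
        ]
    }
]
```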

To learn more about how these input values and synonyms can be accessed, take a look at Routing > Input.

Platform-specific Additions to Input Types

On Dialogflow, you can add the automatedExpansion parameter to allow automated expansion, like this:
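A sketch of how this parameter may be attached to an input type (placement inside a dialogflow object is an assumption based on the platform-specific pattern used elsewhere in the model):

```json
{
    "name": "myCityInputType",
    "dialogflow": {
        "automatedExpansion": true
    },
    "values": [
        {
            "value": "Berlin"
        }
    ]
}
```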

Platform Specific Elements

If you only want to use certain features for one of the platforms, you can also add objects for their natural language understanding tools (nlu) to the model.

For Alexa Skills, Jovo currently supports the built-in NLU (natural language understanding) alexa, while for Google Assistant, dialogflow is supported.


There are two ways you can add dialogflow specific elements:

  • Add options to Jovo Language Model intents
  • Add intents and entities to the dialogflow element

You can add options to Jovo intents like this:
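A sketch of an intent with Dialogflow-specific options attached (the webhookUsed flag is shown as an additional illustrative option):

```json
{
    "name": "HelloWorldIntent",
    "phrases": [
        "hello",
        "say hello"
    ],
    "dialogflow": {
        "priority": 500000,
        "webhookUsed": true
    }
}
```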

In the above example, you can see that you can add specific elements like a priority to an intent.

The priority can have the following values:

| Definition | Value | Color |
| --- | --- | --- |
| Highest | 1000000 | Red |
| High | 750000 | Orange |
| Normal | 500000 | Blue |
| Low | 250000 | Green |
| Ignore | 0 | Grey |

Similar to the alexa element, you can also add dialogflow specific intents and entities to the language model.

The dialogflow object contains the agent data in its original syntax. For example, you can export your Dialogflow agent, look at the files, and copy the parts you need into this section of the Jovo Language Model file.
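As a sketch, a dialogflow element with intents in the exported Dialogflow agent syntax may look like this (the intent names and flags shown are typical Dialogflow defaults, used here for illustration):

```json
"dialogflow": {
    "intents": [
        {
            "name": "Default Fallback Intent",
            "auto": true,
            "webhookUsed": true,
            "fallbackIntent": true
        },
        {
            "name": "Default Welcome Intent",
            "auto": true,
            "webhookUsed": true,
            "events": [
                {
                    "name": "WELCOME"
                }
            ]
        }
    ]
}
```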
