Wednesday, 20 September 2017

Adding a speech interface to the Watson Conversation Service

The IBM Watson Conversation Service does a great job of providing an interface that closely resembles a conversation with a real human being. However, with the advent of products like the Amazon Echo, Microsoft Cortana and the Google Home, people increasingly prefer to interact with services by speaking rather than typing. Luckily IBM Watson also has Text to Speech and Speech to Text services. In this post we show how to hook these services together to provide a unified speech interface to Watson's capabilities.

In this blog we will build upon the existing SpeechToSpeech sample, which takes what you say in one language and then leverages Watson's machine translation service to speak it back to you in another language. You can try the application described here on Bluemix, or access the code on GitHub to see how you can customise it and/or deploy it on your own server.

This application has only one page and it is quite simple from the user's point of view.
  • At the top there is some header text introducing the sample and telling users how to use it. 
  • The sample uses some browser audio interfaces that are only available in recent browser versions. If we detect that these features are not present we put up a message telling the user that they need to choose a more modern browser. Hopefully you won't ever see this message.
  • In the original sample there are two drop down selection boxes which allow you to specify the source and target language. We removed these drop downs since they are not relevant to our modified use case.
  • The next block of the UI gives the user a number of different ways to enter speech samples:
    • There is a button which allows you to start capturing audio directly from the microphone. Whatever you say will be buffered and then passed directly to the transcription service. While capturing audio, the button changes colour to red and the icon changes; this is a visual indication that recording is in progress. When you are finished talking, click the button again to stop audio capture.
    • If you are working in a noisy environment, or if you don't have a good quality microphone, it might be difficult for you to speak clearly to Watson. To help solve this problem we have provided some sample files hosted in the web app. Click on one of the sample buttons to play the associated file and use it as input.
    • If you have your own recording, you can click on the "Select File" button and select the file containing the audio input that you want to send to the speech-to-text service.
    • Last, but not least, you can drag and drop an audio file onto the page to have it instantly uploaded.
  • The transcribed text is displayed in an input box (so you can see if Watson is hearing properly) and sent to either the translation service (in the original version) or the conversation service (in our updated version). If there is a problem with the way your voice is being transcribed, see this previous article on how to improve it.
  • When we get a response from the conversation or translation service we place the received text in an output text box, and we also call the text-to-speech service to read out the response and save you the bother of having to read it.
I know that you want to understand what is going on under the covers so here is a brief overview:
  • The app.js file is the core of the web application. It implements the connections between the front end code that runs in the browser and the various Watson services. This involves establishing three back-end REST services. This indirection is needed because you don't want to include your service credentials in the code sent to the browser, and because your browser's cross-site scripting protections will prohibit you from making a direct call to the Watson services from your browser. The services are:
    • /message - this REST service implements the interface to the Watson Conversation service. Every time we have a text utterance transcribed, we do a POST on this URL with a JSON payload like {"context":{...},"input":{"text":"<transcribed_text>"}}. The first time we call the service we specify an empty context {}, and in each subsequent call we supply the context object that the server sent back to us the last time. This allows the server to keep track of the state of the conversation.
      Most conversation flows are programmed to give a trite greeting in response to the first message. To avoid spending time on this, the client code sends an initial blank message when the page loads to get this out of the way (there is a client-side sketch of this flow after this list).
    • /synthesize - this REST service is used to convert the response into audio. All this service does is convert a GET on http://localhost:3000/synthesize?voice=en-US_MichaelVoice&text=Some%20response into a GET on the URL https://watson-api-explorer.mybluemix.net/text-to-speech/api/v1/synthesize?accept=audio%2Fwav&voice=en-US_MichaelVoice&text=Some%20response. This returns a .wav file with the text "Some response" being spoken in US English by the voice "Michael" (there is a server-side sketch of this and the /token route after this list).
    • /token - the speech to text transcription is an exception to the normal rule that your browser shouldn't connect directly to the Watson service. For performance reasons we chose to use the websocket interface to the speech to text service. At page load time, the browser will do a GET on this /token REST service and it will respond with a token code that can then be included in the URL used to open the websocket. After this, all sound information captured from the microphone (or read from a sample file) is sent via the websocket directly from the browser to the Watson speech to text service.
  • The index.html file is the UI that the user sees. 
    • As well as defining the main UI elements which appear on the page, it also includes main.js, which is the client side code that handles all interaction in your browser.
    • It also includes the jQuery and Bootstrap modules, but I won't cover these in detail.
  • You might want to have a closer look at the client side code which is contained in a file public/js/main.js:
    • The first 260 lines of code are concerned with how to capture audio from the client's microphone (if the user allows it - there are tight controls on when/if browser applications are allowed to capture audio). Some of the complexity of this code is due to the different ways that different browsers deal with audio. Hopefully it will become easier in the future. 
    • Regardless of what quality audio your computer is capable of capturing, we downsample it to 16-bit mono at 16 kHz because this is what the speech recognition service is expecting.
    • Next we declare which language model we want to use for speech recognition. We have hardcoded this to a model named "en-GB_BroadbandModel", which is a model tuned to work with high fidelity captures of speakers of UK English (sadly there is no language model available for Irish English). However, we have left in a few other language models commented out to make it easy for you if you want to change to another language. Consult the Watson documentation for a full list of the language models available.
    • The handleFileUpload function deals with file uploads: either uploads which happen as a result of explicitly clicking on the "Select File" button, or uploads which happen as a result of a drag-and-drop event.
    • The initSocket function manages the interface to the websocket that we use to communicate to/from the speech-to-text service. It declares that the showResult function should be called when a response is received. Since it is not always clear when a speaker is finished talking, the speech-to-text service can return several times. As a result the msg.results[0].final variable is used to determine whether the current transcription is final. If it is an intermediate result, we just update the resultsText field with what we heard. If it is the final result, the msg.results[0].alternatives[0].transcript variable is used as the most likely transcription of what the user said and it is passed on to the converse function.
    • The converse function handles sending the detected text to the Watson Conversation Service (WCS) via the /message REST interface which was described above. When the service gives a response to the question, we pass it to the text-to-speech service via the TTS function and we write it in the response textarea so it can be read as well as listened to.
  • In addition there are many other files which control the look and feel of the web page, but these won't be described in detail here, e.g. 
    • Style sheets in the /public/css directory
    • Audio sample files in the /public/audio directory
    • Images in the /public/images directory
    • etc.
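To make the /message flow concrete, here is a rough client-side sketch of the context round-tripping and the initial blank message. This is not the exact code from public/js/main.js; the variable names and the use of output.text are illustrative shorthand for how such a call could look with the jQuery that the page already loads.

// Illustrative sketch only - not the exact code from main.js.
// Keep the conversation context between calls so the server can track the dialog state.
var context = {};

function converse(transcribedText) {
  return $.ajax({
    type: 'POST',
    url: '/message',
    contentType: 'application/json',
    dataType: 'json',
    data: JSON.stringify({ context: context, input: { text: transcribedText } })
  }).then(function (data) {
    context = data.context;            // reuse the returned context on the next call
    return data.output.text.join(' '); // reply text to display and to send to /synthesize
  });
}

// Send an initial blank message at page load to get the greeting out of the way.
$(function () { converse(''); });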
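Similarly, here is a rough sketch of how the /synthesize and /token routes could be wired up in app.js using Express and the watson-developer-cloud SDK. The credentials, port and error handling are placeholders rather than the sample's real code.

// Rough sketch of the back-end routes - credentials and port are placeholders.
var express = require('express');
var watson = require('watson-developer-cloud');
var app = express();

var textToSpeech = watson.text_to_speech({
  version: 'v1',
  username: '<TTS_USERNAME>',
  password: '<TTS_PASSWORD>'
});

var authorization = watson.authorization({
  version: 'v1',
  username: '<STT_USERNAME>',
  password: '<STT_PASSWORD>',
  url: 'https://stream.watsonplatform.net/speech-to-text/api'
});

// /synthesize: proxy the browser's GET to the Text to Speech service and stream back a .wav file.
app.get('/synthesize', function (req, res) {
  var transcript = textToSpeech.synthesize({
    text: req.query.text,
    voice: req.query.voice || 'en-US_MichaelVoice',
    accept: 'audio/wav'
  });
  transcript.on('response', function () { res.type('audio/wav'); });
  transcript.pipe(res);
});

// /token: hand the browser a short-lived token so it can open the speech-to-text websocket directly.
app.get('/token', function (req, res) {
  authorization.getToken({ url: 'https://stream.watsonplatform.net/speech-to-text/api' },
    function (err, token) {
      if (err) {
        res.status(500).send('Error retrieving token');
      } else {
        res.send(token);
      }
    });
});

app.listen(3000);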
Anyone with a knowledge of how web applications work should be able to figure out the rest. If you have any trouble, post your question as a comment on this blog.
At the time of writing, there is an instance of this application running at https://speak-to-watson-app.au-syd.mybluemix.net/ so you can see it running even if you are having trouble with your local deployment. However, I can't guarantee that this instance will stay running due to limits on my personal Bluemix account.

Friday, 15 September 2017

Translating a Chatbot

You have a trained chatbot built in English and your boss wants it working in German next week. Here is what I would do.
Tell her it is impossible. Building a new chatbot in a different language involves starting the whole process from scratch. Linguistic, cultural and company process reasons mean a translated chatbot won't work.
Then I would build it anyway.
Create a language translation service in Bluemix. Note the username and password this service has.
Get the codes of the languages you want to translate between. From English ('en'), as of September 2017, you can translate to Arabic ('ar'), Brazilian Portuguese ('pt'), French ('fr'), German ('de'), Italian ('it'), Japanese ('ja'), Korean ('ko') and Spanish ('es').
Take your current ground truth of questions and intents. Put the questions and intents in a spreadsheet and sort the list by intent, so that all the questions about a topic are together.
The Python code to translate the questions into German is below. You need to use the username and password you noted earlier.

import csv
import json

from watson_developer_cloud import LanguageTranslatorV2 as LanguageTranslator

# Credentials from the language translation service created earlier.
language_translator = LanguageTranslator(
    username='',
    password='')

with open('myfile.csv', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')
    for row in spamreader:
        # Print the original question followed by its German translation.
        print(row[0], end=',')
        translation = language_translator.translate(text=row[0],
                                                    source='en',
                                                    target='de')
        a = json.dumps(translation, indent=2, ensure_ascii=False)
        print(a.strip('"'))
where myfile.csv is
Am I allowed to speak in german,
Do you speak German?,
Do you speak English?,
What languages can I talk to you in?,
And this gives the output
davids-mbp:translate davidcur$ python translate.py
Am I allowed to speak in german,Bin ich konnte in Deutsch zu sprechen
Do you speak German?,Möchten Sie Deutsch sprechen?
Do you speak English?,Wollen Sie Englisch sprechen?
What languages can I talk to you in?,Welche Sprachen kann ich zu Ihnen sprechen?
There are certain phrases and words that you will want to translate in a non-standard way. If the chatbot talks about a company called 'Exchange', you will want to warn the translator that it should not translate that into the German word for exchange. To do this you load a glossary file, glossary.tmx, into your translator.
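The glossary is a TMX file. As a rough illustration (the header attributes here are only placeholders), a minimal glossary.tmx for the 'Exchange' example could look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <header creationtool="manual" creationtoolversion="1.0"
          datatype="plaintext" segtype="sentence"
          adminlang="en" srclang="en" o-tmf="none"/>
  <body>
    <!-- Force 'Exchange' (the company name) to stay as-is in the German output -->
    <tu>
      <tuv xml:lang="en"><seg>Exchange</seg></tuv>
      <tuv xml:lang="de"><seg>Exchange</seg></tuv>
    </tu>
  </body>
</tmx>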
You then run code to tell your translation service to use this glossary. In Node.js this is:

var watson = require('watson-developer-cloud');
var fs = require('fs');

var language_translator = watson.language_translator({
  version: 'v2',
  url: 'https://gateway.watsonplatform.net/language-translator/api',
  username: "",
  password: ""
});

var params = {
  name: 'custom-english-to-german',
  base_model_id: 'en-de',
  forced_glossary: fs.createReadStream('glossary.tmx')
};

language_translator.createModel(params,
  function(err, model) {
    if (err)
      console.log('error:', err);
    else
      console.log(JSON.stringify(model, null, 2));
  }
);
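Once this custom model has finished training, you can rerun the translations through it. The createModel response includes a model_id; a minimal sketch of using it, reusing the language_translator object above (the model id shown is just a placeholder), is:

// Reuses the language_translator object created above.
// '<CUSTOM_MODEL_ID>' is a placeholder for the model_id returned by createModel.
language_translator.translate({
  text: 'Do you work with Exchange?',
  model_id: '<CUSTOM_MODEL_ID>'
}, function(err, result) {
  if (err)
    console.log('error:', err);
  else
    console.log(JSON.stringify(result, null, 2));
});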

The terms in your glossary are likely to be in your Entities as most of your common proper nouns, industry terms and abbreviations end up in there.
Now get someone who speaks both English and the target language fluently. Stick these translated questions in the spreadsheet, then go through each question and humanify the translation. This is a pretty quick process. They will occasionally come across words that have to be added to the glossary.tmx file and the translations rerun.
At the end of this you have an attempt at a ground truth in another language for a chatbot. There are several reasons why this won't be perfect. Germans do not speak like translated British people. They have different weather, culture and laws. And the differences between Germany and Britain are smaller than between many countries.
Their questions are likely to be different. But as a first cut it gets a chatbot to the point where it might recognise a fair chunk of what people are saying, and it can at least be used to help you gather new questions or to classify the new questions you collect from actual Germans.
Manufacturing questions never really works well, as Simon pointed out here, and translation of questions is close to that. But it can be a useful tool to quickly get a chatbot up to the point where it can be bootstrapped into a full system.

Tuesday, 5 September 2017

Watson Speech to Text with Node.js

Talking to a computer makes you feel like you're living in the future. In this post I will show how to use Node.js to expand on the Watson Speech to Text (STT) example and improve the accuracy of the transcription.
As a base I will take the cognitive car demo and try to write an STT system for it. Once speech to text can correctly interpret what a person asks the car, we will want to send those commands to the Watson Conversation Service car demo. But I will deal with connecting STT to WCS in another post.
Log into Bluemix and set up an STT service. Create new credentials and save them, as you will use them in all the following programs. This username and password are used everywhere below.
Next, assuming you already have Node installed, you also need the Watson Developer Cloud SDK:
npm install watson-developer-cloud --save
and the speech-to-text-utils
npm install watson-speech-to-text-utils -g

Get an audio file of the sorts of things you would say to this Car demo. I asked questions like the ones below

turn on the lights
where is the nearest restaurant
wipe the windscreen
When is the next petrol station
When is the next gas station
does the next gas station have rest rooms
How far are the next rest rooms
please play pop music
play rock music
play country music


I have made an audio file of that (I had a cold). You can get it here or record your own version.
In speech_to_text.v1.js in the SDK examples folder, put in the username and password for the service you created above, and point it at the sound file of commands:


const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');
const fs = require('fs');

const speech_to_text = new SpeechToTextV1({
  username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE',
  password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE'
});

const recognizeStream = speech_to_text.createRecognizeStream({ content_type: 'audio/wav' });
fs.createReadStream(__dirname + '/DavidCarDemo.wav').pipe(recognizeStream);
recognizeStream.pipe(fs.createWriteStream('transcription.txt')); // write the transcript out

Now look at transcription.txt. We want to improve the accuracy of this transcription, so we create a custom model that we will train to make STT understand our speech better.
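A minimal sketch of createmodel.js, assuming the en-US_BroadbandModel base model and an arbitrary model name (swap in whichever base model and name you want), looks like this:

// createmodel.js - create an empty customisation that we will then train.
// The name and description are arbitrary; the base model is assumed to be en-US_BroadbandModel.
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

const speech_to_text = new SpeechToTextV1({
  username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE',
  password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE'
});

speech_to_text.createCustomization({
  name: 'car-demo-model',
  base_model_name: 'en-US_BroadbandModel',
  description: 'Custom model for the cognitive car demo'
}, function(err, customization) {
  if (err)
    console.log('error:', err);
  else
    console.log(JSON.stringify(customization, null, 2)); // prints the customization_id
});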

Running node createmodel.js gives you a customization_id that is needed from here on:

{ "customization_id": "06da5480-915c-11e7-bed0-ef2634fd8461" }
If we tell STT to take in the car demo corpus, it learns from this the sort of words to expect the user to say to the system. First run
watson-speech-to-text-utils set-credentials
and give it the same username and password as above. Then
watson-speech-to-text-utils customization-add-corpus
will ask you for a conversation workspace. Give it car_workspace.json, downloaded from the car demo. Adding a conversation workspace as a corpus improves accuracy on the words that are unusually common in our conversation.
Now we want to improve accuracy on words that are not common in the corpus. For example, "Does the next" is currently heard as "dostinex".
watson-speech-to-text-utils corpus-add-words -i 06da5480-915c-11e7-bed0-ef2634fd8461
and give it words.json, which looks like:


{
  "words": [{
    "display_as": "could",
    "sounds_like": [ "could" ],
    "word": "culd"
  }, {
    "display_as": "closeby",
    "sounds_like": [ "closeby" ],
    "word": "closeby"
  }, {
    "display_as": "does the next",
    "sounds_like": [ "does the next", "dostinex" ],
    "word": "does_the_next"
  }
]
}
The speech utils allow you to see what words you have in your model. Words added do not overwrite old ones, so sometimes you need to delete old values:
watson-speech-to-text-utils customization-list-words -i 06da5480-915c-11e7-bed0-ef2634fd8461
watson-speech-to-text-utils corpus-delete-word -i 06da5480-915c-11e7-bed0-ef2634fd8461 -w "word to remove"

Finally, in transcribe.js, you need to tell it to use the model we have just made by adding the customization_id to the params:


const params = {
  content_type: 'audio/wav',
  customization_id: '06da5480-915c-11e7-bed0-ef2634fd8461' // the custom model created above
};
The code I have used is up on GitHub here.
Now we get a more accurate transcription of the voice commands: first by training on the Conversation corpus, and then by fixing other common errors using a words.json file.