IBM Watson™ Natural Language Understanding Service Ideas

Welcome to the IBM Watson™ Natural Language Understanding Service Ideas Portal


We welcome and appreciate your feedback on the IBM Watson™ Natural Language Understanding Service to help make it even better than it is today!


The ideas portal is for sharing ideas and feature requests with us that will make the IBM Watson™ Natural Language Understanding Service better. If you are looking for troubleshooting help or wondering how to use the service, please check the IBM Watson™ Natural Language Understanding Service documentation. Please do not use the Ideas Portal for reporting bugs - we ask that you report bugs or issues with the product by contacting IBM support.


Before you submit an idea, please perform a search first as a similar idea may have already been reported in the portal.


If a related idea is not yet listed, please create a new idea and include a description of the expected behavior, why having this feature would improve the service, and how it would address your use case.

Create a REST endpoint to duplicate a WKS custom model instance.

Currently, we are unable to scale beyond 20 threads when using a WKS custom model for entity and relation prediction with the NLU service. Our documents are very large, and a single document can take up to 5 minutes to process. With multiple users and multiple documents, we must either process one document at a time or share the 20 threads among them. IBM employees told us that the only way to scale is to manually deploy WKS models to new NLU instances when our usage increases. So our choice is to deploy WKS instances manually whenever usage grows, or to keep a high number of instances deployed at all times and pay $800 per instance per month, even when we don't need the capacity.

An easy solution would be for you to provide an endpoint to duplicate a custom model, either from Watson Knowledge Studio directly or from the NLU service. That way we can handle the scaling on our side, and we don't need to hire an employee whose amazing job would be to deploy custom models manually.
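A minimal sketch of what the requested endpoint could look like, assuming a hypothetical `POST /v1/custom_models/{model_id}/duplicate` route on the NLU API. No such route exists today; the path, request body, and response shape below are illustrative only, while the `version` query parameter and API-key basic auth mirror the NLU service's existing REST conventions:

```python
import requests

# Hypothetical sketch only: the /duplicate route below is the requested
# feature, not a real NLU API call. Endpoint URL and credentials are
# placeholders for a real NLU service instance.
NLU_URL = "https://gateway.watsonplatform.net/natural-language-understanding/api"
API_KEY = "YOUR_APIKEY"

def duplicate_custom_model(model_id: str, target_instance_url: str) -> str:
    """Ask the service to copy a deployed WKS custom model into another
    NLU instance, so capacity can be added programmatically instead of
    redeploying by hand from Watson Knowledge Studio."""
    resp = requests.post(
        f"{NLU_URL}/v1/custom_models/{model_id}/duplicate",
        params={"version": "2017-02-27"},
        json={"target_instance": target_instance_url},
        auth=("apikey", API_KEY),
    )
    resp.raise_for_status()
    # Assumed response shape: the id of the copied model in the target instance.
    return resp.json()["new_model_id"]
```

With something like this, a caller could watch its own document queue and duplicate the model into a freshly provisioned NLU instance whenever the 20-thread ceiling is reached, then tear the extra instance down when load drops.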

  • Guest
  • Nov 13 2017
  • Markus Müller commented
    01 Jul 19:26

    Any service that depends on NLU would benefit from programmatic model deployment to support automatic scale-out.