Quantum Thursday Q&As with Dominic Widdows

Dominic Widdows, Senior Principal NLP Scientist at LivePerson, answers your questions from the first edition of Quantum.Tech's Quantum Thursdays, July 30.

What are the latest developments of NLP in the quantum realm?

I would look particularly at the DisCoCat work - for example, the paper by Meichanetzidis et al. from SemSpace 2020 - and if you're interested in more, investigate other presentations from the QNLP2019 conference.

What kinds of possibilities do you see for using quantum computing to analyse molecules for drug discovery and material science?

In broad strokes, analyzing molecules themselves, and analyzing literature about them. On the topic of molecular simulation, I'm not a biochemist and I don't know enough to say anything more confident than the generic "it's a problem with comparatively small input/output and enormous combinatorial complexity, which is potentially a good fit for quantum computing". Beyond that, I'd start reading and following links from articles like Physics - Waiting for the Quantum Simulation Revolution.

On literature-based discovery, I’d start by noting that it’s a long-standing problem with many interesting approaches - one of the more quantum-like is Discovering discovery patterns with predication-based Semantic Indexing.

How do you design a quantum gate with magnetic field interactions?

I don’t know, good question.

Do you think the grammar approach of DisCo will be scalable in the near future? I feel that English might be a bit too irregular to take that approach. What is your opinion?

For scalability, getting sentences parsed in a categorial grammar is perhaps still a concern, and I'm excited that quantum computing might help with this too (see, for example, Wiebe et al.). I hope to compare the relative scalability of an explicit grammatical composition model like DisCoCat with a more implicit model such as ELMo, which claims to capture such information in the hidden layers of a neural network. Some of those neural methods are pretty heavyweight as well, so while scalability is still a concern, DisCoCat may not be especially off-the-charts here. This is an issue I hope to follow in the near future.
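To make the "explicit grammatical composition" idea concrete, here is a minimal sketch of DisCoCat-style composition: a transitive verb is modeled as a rank-3 tensor that contracts with subject and object vectors to yield a sentence vector. All the vectors and tensor entries below are made-up toy data, not values from any trained model.

```python
# Toy DisCoCat-style composition: a transitive verb is a rank-3 tensor
# that contracts with a subject vector and an object vector to produce
# a sentence vector. All numbers are made-up toy data for illustration.

def contract(verb, subj, obj):
    """Contract verb[i][j][k] with subj[i] and obj[k] into a vector over j."""
    dim_i, dim_j, dim_k = len(verb), len(verb[0]), len(verb[0][0])
    sentence = [0.0] * dim_j
    for i in range(dim_i):
        for j in range(dim_j):
            for k in range(dim_k):
                sentence[j] += subj[i] * verb[i][j][k] * obj[k]
    return sentence

subj = [1.0, 0.0]                    # toy noun vector, e.g. "dogs"
obj = [0.0, 1.0]                     # toy noun vector, e.g. "cats"
verb = [[[0.2, 0.8], [0.5, 0.5]],    # toy rank-3 tensor, e.g. "chase"
        [[0.1, 0.9], [0.3, 0.7]]]

print(contract(verb, subj, obj))     # -> [0.8, 0.5]
```

The point of the sketch is only that the sentence meaning is computed by an explicit tensor contraction dictated by the grammar, in contrast to a neural model like ELMo, where any such structure is implicit in the learned weights.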

Can we use visualization - that is, image processing at the front end and natural language processing at the back end, with both represented as tensors?

It seems to be an open question whether there are any good ways to visualize higher-order tensors - even the traditional rank-2 tensors in 3 dimensions (such as the stress and strain tensors in mechanical engineering) are hard to visualize. When I've worked in the past on visualizing word relationships, we've taken the approach of showing the words themselves, rather than visualizing more abstract mathematical structures. This isn't an especially good answer to your question, it just explains why I haven't really tried very hard.
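As a small sketch of what "showing the words themselves" can mean in practice: rather than drawing a high-order tensor, one can list each word's nearest neighbours by cosine similarity. The word vectors below are made-up toy data, not output from any real model.

```python
# Instead of visualizing an abstract tensor, show word relationships
# directly: rank each word's neighbours by cosine similarity.
# The vectors below are made-up toy data for illustration only.
import math

vectors = {
    "king":  [0.9, 0.1, 0.3],
    "queen": [0.8, 0.2, 0.3],
    "apple": [0.1, 0.9, 0.2],
    "pear":  [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def neighbours(word):
    """Other words ranked by similarity to `word`, most similar first."""
    return sorted((w for w in vectors if w != word),
                  key=lambda w: cosine(vectors[word], vectors[w]),
                  reverse=True)

print(neighbours("king"))   # "queen" should rank first
```

A ranked word list like this is readable at any vector dimension, which is exactly why showing words can sidestep the tensor-visualization problem.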

How can a student who is good at language understanding contribute?

Obvious suggestions start with learning at least one programming language (at the moment I'd recommend Python), and learning about some of the most standard NLP tools, packages, and datasets, and how they work in different languages. Depending on which languages you're interested in, there is a lot of opportunity to contribute by investigating just how well techniques developed for (say) English or Chinese work with other widely-used but less-well-supported languages (like Hindi, Indonesian, Arabic, Portuguese). It's easier than people sometimes assume to find serious problems - for example, using slang text messages or tweets as input for a machine translation system can quickly reveal weaknesses, especially with some of the above languages. Once you've learned a few tools, applied them to a few situations, and seen some of what they do well and what they do badly, you're in a much better position to see what sort of problems might be interesting to try to fix, and to have intuitions about what fixes you might try.
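For a student just starting out, a minimal first exercise along these lines, using only the Python standard library (real projects would typically use packages such as NLTK or spaCy, but the core idea is the same), might look like:

```python
# A minimal first NLP exercise using only the Python standard library:
# tokenize a snippet of text and count word frequencies. Trying this on
# text from different languages is a quick way to see where simple
# assumptions (like "words are separated by spaces") break down.
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"\w+", text.lower())

text = "NLP tools work well for English; how well do they work for Hindi?"
counts = Counter(tokenize(text))
print(counts.most_common(3))
```

Running the same few lines over tweets, text messages, or non-English text is one cheap way to start noticing the kinds of weaknesses mentioned above.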

How do you accommodate searches where people type in the tune of a song they vaguely remember?

I’m not familiar with such interfaces in practice - I haven’t even used a non-paper version of the famous Dictionary of Musical Themes. Sorry I can’t give a better answer on this.


For the full deep dive into using NLP with Quantum, watch Dominic's presentation at the first Quantum Thursday here: Which Language Operations to Implement First with Quantum Computers?