Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


algorithm - Fastest way to find most similar string to an input?

Given a query string Q of length N, and a list L of M sequences of length exactly N, what is the most efficient algorithm to find the string in L with the fewest mismatch positions to Q? For example:

Q = "ABCDEFG";
L = ["ABCCEFG", "AAAAAAA", "TTAGGGT", "ZYXWVUT"];
answer = L.query(Q);  # Returns "ABCCEFG"
answer2 = L.query("AAAATAA");  # Returns "AAAAAAA"

The obvious way is to scan every sequence in L, so each search takes O(M * N). Is there any way to do this in sublinear time? I don't mind a large upfront cost to organize L into some data structure, because it will be queried many times. Also, breaking ties arbitrarily is fine.

Edit: To clarify, I am looking for the Hamming distance.
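For reference, the obvious O(M * N) scan looks like this (a minimal Python sketch; `hamming` and `query` are illustrative names, not an existing API):

```python
def hamming(a, b):
    # Count positions where two equal-length strings differ.
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def query(L, Q):
    # Linear scan over all M strings of length N: O(M * N).
    return min(L, key=lambda s: hamming(s, Q))

L = ["ABCCEFG", "AAAAAAA", "TTAGGGT", "ZYXWVUT"]
print(query(L, "ABCDEFG"))  # ABCCEFG (1 mismatch)
print(query(L, "AAAATAA"))  # AAAAAAA (1 mismatch)
```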



1 Answer


All the answers except the one that mentions the best-first algorithm are very much off. Locality-sensitive hashing is basically dreaming. This is the first time I've seen answers so far off on Stack Overflow.

First, this is a hard but standard problem that was solved many years ago in several different ways.

One approach uses a trie, such as the one presented by Sedgewick here:

http://www.cs.princeton.edu/~rs/strings/

Sedgewick also has sample C code.

I quote from the paper titled "Fast Algorithms for Sorting and Searching Strings" by Bentley and Sedgewick:

"‘‘Near neighbor’’ queries locate all words within a given Hamming distance of a query word (for instance, code is distance 2 from soda). We give a new algorithm for near neighbor searching in strings, present a simple C implementation, and describe experiments on its efficiency."

A second approach is to use indexing. Split the strings into character n-grams and build an inverted index over them (google for the Lucene spell checker to see how it's done). Use the index to pull potential candidates and then run Hamming distance or edit distance on the candidates. This is the approach guaranteed to work best (and it is relatively simple).
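A minimal sketch of the n-gram approach, assuming character 3-grams and a plain Python dict as the inverted index (Lucene's internals differ); note that the candidate filter is a heuristic, so a fallback scan covers queries that share no n-gram with any indexed string:

```python
from collections import defaultdict

def ngrams(s, n=3):
    # The set of overlapping character n-grams of s.
    return {s[i:i + n] for i in range(len(s) - n + 1)}

class NGramIndex:
    def __init__(self, strings, n=3):
        self.n = n
        self.strings = strings
        self.index = defaultdict(set)  # n-gram -> set of string ids
        for i, s in enumerate(strings):
            for g in ngrams(s, n):
                self.index[g].add(i)

    def query(self, q):
        # Pull candidates sharing at least one n-gram with q,
        # then verify with exact Hamming distance.
        cand = set()
        for g in ngrams(q, self.n):
            cand |= self.index.get(g, set())
        if not cand:
            cand = set(range(len(self.strings)))  # heuristic missed: scan all
        hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
        return min((self.strings[i] for i in cand),
                   key=lambda s: hamming(s, q))
```

The win is that most queries touch only the few strings sharing an n-gram with the query, rather than all M of them.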

A third approach appears in the area of speech recognition. There the query is a wav signal, and the database is a set of strings. There is a "table" that matches pieces of the signal to pieces of words. The goal is to find the best match of words to signal. This problem is known as word alignment.

In the problem posted, there is an implicit cost of matching query parts to database parts. For example one may have different costs for deletion/insertion/substitution and even different costs for mismatching say "ph" with "f".

The standard solution in speech recognition uses dynamic programming, made efficient via heuristics that direct the pruning. In this way only the best candidates, say the top 50, are kept. Hence the name best-first search. In theory you may not get the best match, but usually you get a good one.
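For fixed-length strings under Hamming distance, the pruning idea can be sketched as a simple beam search: score all candidates one character position at a time and drop all but the best `beam` after each step (illustrative Python; with a small beam the true best match can occasionally be pruned away, which is the trade-off described above):

```python
def beam_query(L, q, beam=50):
    # (mismatches so far, candidate string) pairs that survive pruning.
    alive = [(0, s) for s in L]
    for i, qc in enumerate(q):
        # Extend every surviving hypothesis by one character position.
        alive = [(m + (s[i] != qc), s) for m, s in alive]
        # Prune: keep only the `beam` best-scoring candidates.
        alive.sort(key=lambda t: t[0])
        alive = alive[:beam]
    return alive[0][1]
```

With `beam >= len(L)` this degenerates to the exact scan; smaller beams trade accuracy for speed, exactly as in the speech-recognition setting.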

Here is a reference to the latter approach:

http://amta2010.amtaweb.org/AMTA/papers/2-02-KoehnSenellart.pdf

Fast Approximate String Matching with Suffix Arrays and A* Parsing.

This approach applies not only to words but to sentences.


