artist.GetSimilar numerical consistency

 
    • mwalimu said...
    • User
    • 27 Mar 2012, 02:06


    Playing around with artist.GetSimilar this evening, I noticed what appears to be a numerical inconsistency between different artists in how the returned match value is calculated.

    It looks as if for any given artist, the most similar artist gets a similarity value of 1 by definition, and all others are given values based on how their similarity compares to that most similar artist.

    This means that if some artist A has a very high correlation to artist B and no one else comes close, then on A's list of most similar artists, B gets a 1 and the next closest artists might have values well below that, such as .42, .39, .38, and so forth.

    Artist C, on the other hand, doesn't have any "most similar by far" artist but instead has a dozen or more fairly similar artists. Obviously, one of these will be the most similar and get a score of 1, but some of the others might be close behind, with scores for the next three perhaps looking something like .92, .89, .85, etc.

    Now it's quite possible that, by some objective measure of similarity, the second-closest artist to artist A, the one who scored a .42, is actually more closely correlated to artist A than any other artist is to artist C, even though the artists considered most similar to artist C got higher scores. But you can't tell that from the numbers, because of the way they're skewed by artist A having one very close match and artist C not having one.
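    For anyone who wants to reproduce what I'm seeing, here's a rough Python sketch of the calls I'm making. The placeholder API key is mine to fill in, and the JSON field names are my reading of the documented artist.getSimilar response, so treat the details as assumptions:

```python
import requests

API_URL = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_API_KEY"  # placeholder; substitute a real Last.fm API key

def get_similar_matches(artist, limit=10):
    """Return (name, match) pairs for one artist's most similar artists."""
    params = {
        "method": "artist.getsimilar",
        "artist": artist,
        "api_key": API_KEY,
        "format": "json",
        "limit": limit,
    }
    resp = requests.get(API_URL, params=params)
    resp.raise_for_status()
    # "match" comes back as a string in [0, 1]; the top entry appears
    # to always be 1, with everything else scaled relative to it
    similar = resp.json()["similarartists"]["artist"]
    return [(a["name"], float(a["match"])) for a in similar]

# stand-ins for the "artist A" / "artist C" cases described above
for seed in ("Artist A", "Artist C"):
    print(seed, [round(m, 2) for _, m in get_similar_matches(seed)])
```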

    Is there any way to offset or otherwise account for and eliminate this "skewing effect" and get a more objective measurement of how similar two artists are? In other words, if artists X and Y get the same similarity score as artists W and Z, how can I know that the two pairs are objectively comparable, rather than dependent on whether any of these artists has some other "most similar by far" artist?

    mwalimu
    • dunk said...
    • Alumni
    • 29 Mar 2012, 22:22
    I'll pass this along to the MIR team, though the answer is likely "because of this giant mathematical formula, say hello to some advanced probability theory."

    Hi mwalimu,

    You're quite right that the scores you see can't be used to express similarity between arbitrary pairs of artists. It's certainly possible in principle for us to generate normalised scores that would allow you to say "artists A and B are 2.5 times more similar than artists C and D" but I'm afraid that we don't.

    You might be able to do this yourself by looking at the distribution of scores for a large number of artists, and then using some assumptions to reweight the scores for a particular artist based on how their score distribution relates to your global model. But it would be tricky and still might not work. If you're really determined, and have some scientific background, you can find some pointers here. Otherwise I'm afraid you're stuck.
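    As a very rough sketch of what I mean by reweighting against a global model (this is only one naive reading, not a description of our internal formula): pool the raw match scores from a large sample of artists, then replace each of one artist's scores with its quantile in that pool, so scores from different artists' lists end up on the same footing.

```python
import numpy as np

def build_global_pool(score_lists):
    """Pool raw match scores from many artists into one sorted array."""
    return np.sort(np.concatenate([np.asarray(s) for s in score_lists]))

def reweight(scores, pool):
    """Replace each score with its quantile in the global pool.

    This discards the per-artist normalisation and expresses every score
    relative to the same global distribution, making scores from
    different artists' lists roughly comparable.
    """
    return np.searchsorted(pool, np.asarray(scores)) / len(pool)

# toy data echoing the thread: A has one runaway match, C has a flat field
a_scores = [1.0, 0.42, 0.39, 0.38]
c_scores = [1.0, 0.92, 0.89, 0.85]
pool = build_global_pool([a_scores, c_scores])  # in practice: thousands of artists
print(reweight(a_scores, pool))
print(reweight(c_scores, pool))
```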

    • mwalimu said...
    • User
    • 30 Mar 2012, 16:42
    I doubt this qualifies as advanced probability theory, but I did have a couple of semesters of college-level probability and statistics, and I still remember what a correlation coefficient is.

    My intent was to try my hand at developing my own artist recommendation app based on a user's favorite artists, using the artist.GetSimilar method on each of those favorites. When I started looking at the results returned for several artists, I discovered the skewing described in my original post. One reasonable attempt at offsetting it would be to add up the scores of the top X similar artists and calculate an adjustment factor based on how that sum compares to the average of the same figure over a large number of artists. While it wouldn't be as accurate as a true correlation coefficient, it would at least be an improvement over using the skewed results unaltered.
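    Something like the following is what I have in mind. The global average is a made-up number here; in practice I'd estimate it over a large sample of artists, and the choice of X is arbitrary:

```python
def top_x_sum(scores, x):
    """Sum of an artist's X highest match scores."""
    return sum(sorted(scores, reverse=True)[:x])

def adjust(scores, global_avg_top_x, x):
    """Rescale one artist's scores by how their top-X score mass
    compares to the average top-X mass over many artists."""
    factor = global_avg_top_x / top_x_sum(scores, x)
    return [s * factor for s in scores]

# toy numbers from the thread
a_scores = [1.0, 0.42, 0.39, 0.38]  # one runaway match compresses the rest
c_scores = [1.0, 0.92, 0.89, 0.85]  # a flat field of close matches

GLOBAL_AVG_TOP_4 = 3.0  # made up; estimate from real data

print(adjust(a_scores, GLOBAL_AVG_TOP_4, x=4))  # A's scores get scaled up
print(adjust(c_scores, GLOBAL_AVG_TOP_4, x=4))  # C's get scaled down slightly
```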

    mwalimu