Hello all.
I am truly new to the computer vision field, but it fascinates me!
I now have a challenge on my hands, and I am looking for mentors/advisers to give me some guidance.
My project is:
From a photo of a video-game cover, search for that picture in a video-game cover database, and if there is a very good match the app returns a string with the name of the video game and the platform.
Problem example:
1 - Take a photo of a cover similar to this one: http://i.imgur.com/gpZMbRm.jpg
2 - The cover matches this one in the database: http://i.imgur.com/WXhPwf8.png
3 - The app returns the string: "Fifa 12 Playstation 2"
In my preliminary research I found that, for each entry in my cover database, I should save the name of the game, the platform, the URL of the cover, and the image features (keypoints and descriptors).
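Roughly, this is what I have in mind for each database entry (a minimal sketch using pickle; the field names are my own, the zero-filled array is just a placeholder for the real SURF descriptors, and I left the keypoints out because cv2.KeyPoint objects do not pickle directly):

```python
import pickle
import numpy as np

# Placeholder descriptor matrix standing in for the real SURF output
# (64-dimensional float32 descriptors are SURF's default format).
descriptors = np.zeros((1755, 64), dtype=np.float32)

entry = {
    "name": "Fifa 12",                              # game title
    "platform": "Playstation 2",                    # platform string to return
    "cover_url": "http://i.imgur.com/WXhPwf8.png",  # URL of the cover image
    "descriptors": descriptors,                     # SURF descriptors of the cover
}

# Persist the whole database as a list of such entries.
with open("covers_db.pkl", "wb") as f:
    pickle.dump([entry], f)
```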
I am using the SURF feature detector/extractor. The output of my first trials looks something like this: http://i.imgur.com/Rj9lcBr.png .
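The detection step is roughly this (a minimal sketch; the file name and the Hessian threshold of 400 are placeholders, and SURF lives in the xfeatures2d module of the opencv-contrib build):

```python
import cv2

# SURF is in the xfeatures2d contrib module; 400 is a placeholder threshold.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Grayscale load of the query photo (placeholder file name).
img = cv2.imread("cover_photo.jpg", cv2.IMREAD_GRAYSCALE)

keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), "features")  # e.g. 1087 for my query photo
```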
There are some concepts that I am still confused about...
I am not really looking for similarity, right? I just need to check whether there are enough good keypoint matches, right?
Because for the example images mentioned above I get "img1 - 1087 features (query image), img2 - 1755 features - 30 % - SIMILARITY - 321/333 inliers/matched" (a sketch of how I compute these numbers is below).
What are the inliers?
My similarity calculation seems wrong to me... I would say that these two images look about 70 % alike...
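For context, the matching step that produces the numbers above looks roughly like this (a minimal sketch; the file names, the 0.7 ratio threshold, and the 5.0 RANSAC threshold are placeholders):

```python
import cv2
import numpy as np

# Local copies of the query photo and the database cover (placeholder names).
img1 = cv2.imread("query_photo.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("db_cover.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Brute-force matching plus Lowe's ratio test to keep only clear-cut matches.
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# findHomography with RANSAC returns a mask marking which of the good matches
# are geometrically consistent with the estimated homography ("inliers").
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = int(mask.sum()) if mask is not None else 0
print(f"{inliers}/{len(good)} inliers/matched")
```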
PS: I am using Python, so...
Thanks for your time and help.
Sorry if I could not explain my problems/concerns any better.