Augmented reality (2009.09.23 11:04)
from http://opencv.willowgarage.com/wiki/Welcome

>>> New functionality, features: <<<

- General:
* The brand-new C++ interface for most of OpenCV functionality
(cxcore, cv, highgui) has been introduced.
Generally it means that you will need to do less coding to achieve the same results;
it brings automatic memory management and many other advantages.
See the C++ Reference section in opencv/doc/opencv.pdf and opencv/include/opencv/*.hpp.
The previous interface is retained and still supported.
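For example, here is a minimal sketch of the new interface (the file names are
hypothetical); note that nothing needs to be explicitly released:

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main()
    {
        cv::Mat img = cv::imread("lena.jpg", 1);  // load a color image
        cv::Mat gray, edges;
        cv::cvtColor(img, gray, CV_BGR2GRAY);     // convert to grayscale
        cv::Canny(gray, edges, 50, 150);          // detect edges
        cv::imwrite("edges.png", edges);          // save the result
        return 0;  // all the matrices are released automatically
    }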

* The source directory structure has been reorganized; now all the external headers are placed
in a single directory on all platforms.

* The primary build system is CMake, http://www.cmake.org (2.6.x is the preferable version).
+ In the Windows package, the project files for Visual Studio and the makefiles for MSVC,
Borland C++ or MinGW are not supplied anymore; please generate them using CMake.

+ On Mac OS X users can generate project files for Xcode.

+ On Linux and any other platform users can generate project files for
cross-platform IDEs such as Eclipse or Code::Blocks,
or makefiles for building OpenCV from the command line.

* The OpenCV repository has been converted to Subversion, hosted at SourceForge:
http://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary
where the very latest snapshot is at
http://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary/trunk,
and the more or less stable version can be found at
http://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary/tags/latest_tested_snapshot

- CXCORE, CV, CVAUX:

* CXCORE now uses LAPACK (CLapack 3.1.1.1 in OpenCV 2.0) in its various linear algebra functions
(such as solve, invert, SVD, determinant, eigen etc.) and in the corresponding old-style functions
(cvSolve, cvInvert etc.)
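For example, a minimal sketch of solving a linear system through the new
interface (the matrix values are arbitrary):

    #include <opencv/cxcore.h>
    using namespace cv;

    void solveDemo()
    {
        // Solve A*x = b; DECOMP_SVD stays robust even for ill-conditioned A.
        Mat A = (Mat_<double>(2, 2) << 2, 1,
                                       1, 3);
        Mat b = (Mat_<double>(2, 1) << 3, 5);
        Mat x;
        solve(A, b, x, DECOMP_SVD);  // x should be approximately (0.8, 1.4)
    }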

* Lots of new feature and object detectors and descriptors have been added
(there is no documentation on them yet), see cv.hpp and cvaux.hpp:

+ FAST - the fast corner detector, submitted by Edward Rosten

+ MSER - maximally stable extremal regions, submitted by Liu Liu

+ LDetector - fast circle-based feature detector by V. Lepetit (a.k.a. YAPE)

+ Fern-based point classifier and the planar object detector -
based on the works by M. Ozuysal and V. Lepetit

+ One-way descriptor - a powerful PCA-based feature descriptor
(S. Hinterstoisser, O. Kutter, N. Navab, P. Fua, and V. Lepetit,
"Real-Time Learning of Accurate Patch Rectification").
Contributed by Victor Eruhimov

+ Spin Images 3D feature descriptor - based on the A. Johnson PhD thesis;
implemented by Anatoly Baksheev

+ Self-similarity features - contributed by Rainer Lienhart

+ HOG people and object detector - a reimplementation of Navneet Dalal's framework
(http://pascal.inrialpes.fr/soft/olt/). Currently, only the detection part is ported,
but it is fully compatible with the original training code; a minimal usage sketch follows this list.
See cvaux.hpp and opencv/samples/c/peopledetect.cpp.

+ An extended variant of the Haar feature-based object detector - implemented by Maria Dimashova.
It now supports Haar features and LBPs (local binary patterns);
other features can be added fairly easily.

+ Adaptive skin detector and the fuzzy meanshift tracker - contributed by Farhad Dadgostar,
see cvaux.hpp and opencv/samples/c/adaptiveskindetector.cpp
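
Here is the usage sketch promised above for the HOG people detector: a minimal,
hypothetical example (the image file name is an assumption; peopledetect.cpp is
the complete sample):

    #include <opencv/cv.h>
    #include <opencv/cvaux.h>
    #include <opencv/highgui.h>
    using namespace cv;

    int main()
    {
        Mat img = imread("street.jpg", 1);
        HOGDescriptor hog;
        // Coefficients of the SVM trained with N. Dalal's original framework:
        hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
        std::vector<Rect> found;
        hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);
        for (size_t i = 0; i < found.size(); i++)
            rectangle(img, found[i].tl(), found[i].br(), Scalar(0,255,0), 2);
        imwrite("people.png", img);
        return 0;
    }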

* The new traincascade application complementing the new-style Haar+LBP object detector has been added.
See opencv/apps/traincascade.

* FLANN, the powerful library for approximate nearest neighbor search by Marius Muja,
is now shipped with OpenCV, and the OpenCV-style interface to the library
is included into cxcore. See cxcore.hpp and opencv/samples/c/find_obj.cpp
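A minimal sketch of the interface (the descriptor matrices are assumed to be
CV_32F, one row per feature):

    #include <opencv/cxcore.h>
    using namespace cv;

    void matchDemo(const Mat& descriptors, const Mat& queries)
    {
        // Build an index of 4 randomized kd-trees over the descriptors.
        flann::Index index(descriptors, flann::KDTreeIndexParams(4));

        Mat indices(queries.rows, 2, CV_32S);  // ids of the 2 nearest neighbors
        Mat dists(queries.rows, 2, CV_32F);    // and the distances to them
        index.knnSearch(queries, indices, dists, 2, flann::SearchParams(64));
        // A query is usually accepted as matched when its best distance is
        // much smaller than the second-best one.
    }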

* The bundle adjustment engine has been contributed by PhaseSpace; see cvaux.hpp

* Added a dense optical flow estimation function (based on the paper
"Two-Frame Motion Estimation Based on Polynomial Expansion" by G. Farnebäck).
See cv::calcOpticalFlowFarneback and the C++ documentation
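A minimal sketch (the input frames are assumed to be 8-bit grayscale):

    #include <opencv/cv.h>
    using namespace cv;

    void flowDemo(const Mat& prevGray, const Mat& nextGray)
    {
        Mat flow;  // output: 2-channel float matrix of per-pixel (dx, dy)
        calcOpticalFlowFarneback(prevGray, nextGray, flow,
                                 0.5,   // pyramid scale
                                 3,     // number of pyramid levels
                                 15,    // averaging window size
                                 3,     // iterations at each level
                                 5,     // pixel neighborhood for the polynomial fit
                                 1.2,   // Gaussian sigma of the fit
                                 0);    // flags
    }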

* Image warping operations (resize, remap, warpAffine, warpPerspective)
now all support bicubic and Lanczos interpolation.
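For example, a minimal sketch of 2x upscaling with Lanczos interpolation:

    #include <opencv/cv.h>
    using namespace cv;

    void upscale2x(const Mat& src, Mat& dst)
    {
        // INTER_LANCZOS4 generally preserves more detail than INTER_LINEAR
        // when enlarging images, at a higher computational cost.
        resize(src, dst, Size(src.cols*2, src.rows*2), 0, 0, INTER_LANCZOS4);
    }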

* Most of the new linear and non-linear filtering operations (filter2D, sepFilter2D, erode, dilate ...)
support arbitrary border modes and can use the valid image pixels outside of the ROI
(i.e. the ROIs are not "isolated" anymore), see the C++ documentation.
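For example, a minimal sketch of smoothing with an explicit border mode:

    #include <opencv/cv.h>
    using namespace cv;

    void smoothDemo(const Mat& src, Mat& dst)
    {
        // BORDER_REPLICATE extrapolates the edge pixels by replication
        // instead of treating the image (or ROI) as isolated.
        GaussianBlur(src, dst, Size(5,5), 1.5, 1.5, BORDER_REPLICATE);
    }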

* The data can now be saved to and loaded from GZIP-compressed XML/YML files, e.g.:
cvSave("a.xml.gz", my_huge_matrix);
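The data can be loaded back symmetrically; a one-line sketch (the cast assumes a CvMat was saved):
CvMat* loaded = (CvMat*)cvLoad("a.xml.gz");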

- MLL:
* Added the Extremely Random Trees (CvERTrees), which train much faster
than Boosting or Random Trees (by Maria Dimashova).
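A minimal training sketch (the data and responses matrices are assumed to be
prepared by the caller; the parameter values are arbitrary):

    #include <opencv/ml.h>

    void trainExtraTrees(const CvMat* data, const CvMat* responses)
    {
        CvRTParams params(10,     // max tree depth
                          2,      // min sample count per node
                          0,      // regression accuracy (unused for classification)
                          false,  // no surrogate splits
                          16,     // max categories
                          0,      // no class priors
                          false,  // do not compute variable importance
                          0,      // 0 = default number of active variables per split
                          100,    // max number of trees
                          0.01f,  // sufficient forest accuracy
                          CV_TERMCRIT_ITER | CV_TERMCRIT_EPS);
        CvERTrees ertrees;
        ertrees.train(data, CV_ROW_SAMPLE, responses, 0, 0, 0, 0, params);
    }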

* The decision tree engine and the classes based on it
(Decision Trees themselves, Boost, Random Trees)
have been reworked and now:
+ they consume much less memory (up to a 3x reduction)
+ the training can be run in multiple threads (when OpenCV is built with OpenMP support)
+ the boosting classification on numerical variables is especially
fast because of the specialized low-overhead branch.

* mltest has been added. While far from being complete,
it contains correctness tests for some of the MLL classes.

- HighGUI:
* [Linux] Support for stereo cameras (currently Videre only) has been added.
There is now a uniform interface for capturing video from two-, three- ... n-head cameras.

* Images can now be compressed to or decompressed from buffers in the memory,
see the C++ HighGUI reference manual
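A minimal sketch of an in-memory round trip (the JPEG quality value is arbitrary):

    #include <opencv/highgui.h>
    #include <vector>
    using namespace cv;

    void roundTrip(const Mat& img)
    {
        std::vector<uchar> buf;
        std::vector<int> params;
        params.push_back(CV_IMWRITE_JPEG_QUALITY);
        params.push_back(90);
        imencode(".jpg", img, buf, params);   // compress into a memory buffer
        Mat decoded = imdecode(Mat(buf), 1);  // decode back to a color image
    }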

- Documentation:
* The reference manual has been converted from HTML to LaTeX (by James Bowman and Caroline Pantofaru),
so there is now:
+ opencv.pdf for reading offline
+ and the online up-to-date documentation
(as the result of LaTeX->Sphinx->HTML conversion) available at
http://opencv.willowgarage.com/documentation/index.html

- Samples, misc.:
* A better eye detector has been contributed by Shiqi Yu,
see opencv/data/haarcascades/*[lefteye|righteye]*.xml
* A sample LBP cascade for frontal face detection
has been created by Maria Dimashova,
see opencv/data/lbpcascades/lbpcascade_frontalface.xml
* Several high-quality body parts and facial feature detectors
have been contributed by Modesto Castrillon-Santana,
see opencv/data/haarcascades/haarcascade_mcs*.xml

>>> Optimization: <<<
* Many of the basic functions and image processing operations
(such as arithmetic operations, geometric image transformations, filtering etc.)
have received SSE2 optimizations, so they are several times faster.

* The model of IPP support has been changed. IPP is now supposed to be
detected by CMake at the configuration stage and linked against OpenCV
(in the beta this is not implemented yet, though).

* PNG encoder performance has been improved by a factor of 4 by tuning the parameters.

>>> Bug fixes: <<<
TBD
(see http://sourceforge.net/tracker/?group_id=22870&atid=376677 for the list
of closed and still-open bugs).

Many thanks to everybody who submitted bug reports and/or provided the patches!

>>> Known issues: <<<
* The configure+autotools-based build is currently broken;
please use CMake.
* The OpenCV bug tracker at SF still lists about 150 open bugs.
Some of them may actually be fixed already, and most of the remaining bugs
are going to be fixed by OpenCV 2.0 gold.
* IPP is not supported. As the new OpenCV includes a lot of SSE2 code,
this may not be such a serious problem, though.
The support (at least for the most important functions that do not have
SSE2 optimization) will be restored in 2.0 gold.
* The documentation has been updated and improved a lot, but it still
needs quite a bit of work:
- some of the new functionality in cvaux is not described yet.
- the bibliography part is broken
- there are quite a few known bugs and typos there
- many of the hyperlinks are not working.
* The existing tests partly cover the new functionality
(via the old backward-compatible OpenCV 1.x API), but the coverage is,
of course, not sufficient.
* The new-style Python interface is not included yet

Many of the problems will be addressed in 2.0 gold.
If you have found a specific problem, please file a record in the bug tracker:
http://sourceforge.net/tracker/?group_id=22870
It is best if the bug report includes a small code sample in C++/Python plus
all the data files needed to reproduce the problem.

search (2009.03.03 11:35)
from http://durl.kr/bnz
by Leo Breiman and Adele Cutler


RF is an example of a tool that is useful in doing analyses of scientific data.
But the cleverest algorithms are no substitute for human intelligence and knowledge of the data in the problem.
Take the output of random forests not as absolute truth, but as smart, computer-generated guesses that may be helpful in leading to a deeper understanding of the problem.

Overview

We assume that the user knows about the construction of single classification trees. Random Forests grows many classification trees. To classify a new object from an input vector, put the input vector down each of the trees in the forest. Each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification having the most votes (over all the trees in the forest).

Each tree is grown as follows:

  1. If the number of cases in the training set is N, sample N cases at random, with replacement, from the original data. This sample will be the training set for growing the tree (a sketch of this sampling follows the list).
  2. If there are M input variables, a number m<<M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
  3. Each tree is grown to the largest extent possible. There is no pruning.
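
Here is the sketch referenced in step 1, together with the voting rule from the
Overview; a minimal, hypothetical illustration (not Breiman's code), assuming
some opaque Tree type with a predict() method:

    #include <cstdlib>
    #include <map>
    #include <vector>

    // Step 1: draw a bootstrap sample of N case indices, with replacement.
    std::vector<int> bootstrapSample(int N)
    {
        std::vector<int> idx(N);
        for (int i = 0; i < N; i++)
            idx[i] = std::rand() % N;  // duplicates allowed; some cases left out
        return idx;
    }

    // Classification: each tree votes; the forest returns the majority class.
    template<class Tree, class Sample>
    int forestVote(const std::vector<Tree>& forest, const Sample& x)
    {
        std::map<int, int> votes;
        for (size_t t = 0; t < forest.size(); t++)
            votes[forest[t].predict(x)]++;
        int bestClass = -1, bestCount = 0;
        for (std::map<int, int>::const_iterator it = votes.begin();
             it != votes.end(); ++it)
            if (it->second > bestCount)
            {
                bestClass = it->first;
                bestCount = it->second;
            }
        return bestClass;
    }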

In the original paper on random forests, it was shown that the forest error rate depends on two things:

  • The correlation between any two trees in the forest. Increasing the correlation increases the forest error rate.
  • The strength of each individual tree in the forest. A tree with a low error rate is a strong classifier. Increasing the strength of the individual trees decreases the forest error rate.

Reducing m reduces both the correlation and the strength. Increasing it increases both. Somewhere in between is an "optimal" range of m - usually quite wide. Using the oob error rate (see below) a value of m in the range can quickly be found. This is the only adjustable parameter to which random forests is somewhat sensitive.

Features of Random Forests

  • It is unexcelled in accuracy among current algorithms.
  • It runs efficiently on large databases.
  • It can handle thousands of input variables without variable deletion.
  • It gives estimates of what variables are important in the classification.
  • It generates an internal unbiased estimate of the generalization error as the forest building progresses.
  • It has an effective method for estimating missing data and maintains accuracy when a large proportion of the data are missing.
  • It has methods for balancing error in data sets with unbalanced class populations.
  • Generated forests can be saved for future use on other data.
  • Prototypes are computed that give information about the relation between the variables and the classification.
  • It computes proximities between pairs of cases that can be used in clustering, locating outliers, or (by scaling) give interesting views of the data.
  • The capabilities of the above can be extended to unlabeled data, leading to unsupervised clustering, data views and outlier detection.
  • It offers an experimental method for detecting variable interactions.

search (2009.02.16 15:12)
While an overly complex system may allow perfect classification of the training samples, it is unlikely to perform well on new patterns. This situation is known as overfitting. One of the most important areas of research in statistical pattern classification is determining how to adjust the complexity of the model: not so simple that it cannot explain the differences between the categories, yet not so complex as to give poor classification on novel patterns. Are there principled methods for finding the best complexity for a classifier?

from Prof. Richard Duda's book

search (2009.01.22 15:50)

from http://www.promotionworld.com/news/editors/080317MobileVisualSearch.html

What is the future of mobile internet usage?


March 17, 2008


The world of mobile search is evolving extremely fast. Beginning with keyword-based search and moving through the next step, voice search, end users are now offered the ability to send a photo from their cell phone in order to find information on the Internet relevant to that photo.

Mobile search is a developing branch that allows users to find mobile content interactively on mobile websites. Over the years, mobile content has shifted towards mobile multimedia. Nevertheless, mobile search is not just a simple transfer of PC web search to mobile equipment; it is connected to the specialized segments of mobile broadband and mobile content, both of which have been evolving at a fast pace recently.

The major search engines are aggressively trying to create applications and relationships in order to take advantage of the mobile ad market. According to the leading market research firm eMarketer, strong competition for the US mobile search market can be anticipated, given the large US online ad market and strong pushes by portals. By 2011, mobile search is expected to account for around $715 million.

The mobile directory search industry is almost as old as telecom itself and offers services that enable people to find local services based on their current location by entering a word or phrase on their phone. An example of usage would be a person looking for a local hotel after a tiring journey, or for a taxi company after a night out. The services can also come with a map and directions to assist the user.

What was the next step? GOOG-411. This is another kind of mobile search, this time voice-activated. The free service allows callers to access Google’s local information through voice search. There is no doubt that mobile voice search is simpler and more convenient for callers than typing on the phone’s buttons.

“I’d have to be a visionary to be vindicated, and I’m making no such claim. It’s just hard to ignore that most people prefer talking in their phones to typing on them, and a mobile search engine that made voice search possible might have an easier time finding an audience”, said Bryson Meunier, Product Champion, Natural Search in a posting at www.findresolution.com. For the same reasons Meunier believes that mobile visual search could be bigger than voice search.

How do the searchers initiate a visual query? Simply by snapping a photo of something with their phone, which the mobile search engine processes with algorithms and returns relevant digital content based on its interpretation of the user’s visual query.

Visual search is now gaining popularity. At the CeBIT trade show in Germany, Vodafone demonstrated Otello, a search engine that uses images as input. Users send pictures via MMS (Multimedia Messaging Service) from their mobile phones. Otello then returns information relevant to the picture to the mobile phone, just like a normal search engine. There are other examples of companies, like SnapNow and Mobot, that have actually been offering this service for a few years. Google has its own mobile visual search engine in the form of Neven Vision.

Of course, the audience for mobile visual search is currently not so large, but it might be just a matter of time, predicted Meunier.

search (2009.01.21 12:00)
from http://gizmodo.com/340788/hitachi-builds-15+inch-ultra-thin-plasma-to-go-with-its-15+inch-lcds

Similarity Based Image Retrieval System - With the volume of data already at unprecedented levels and expected to continue to increase rampantly, technology enabling quick searches of still and video images is much in demand. In response, Hitachi has developed a Similarity-Based Image Retrieval technology, a search engine for just such large-scale image and video archives. Similarity-Based Image Retrieval technology automatically extracts quantified information intrinsic to the image — such as color, shapes and forms — and runs searches to locate a match. This innovative search technique can be used for something as basic as searching for a movie scene or image on a camcorder to something as complex as searching for facial imagery in security, video surveillance or law enforcement applications.
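
As a rough idea of what "quantified information intrinsic to the image" can
mean, here is a minimal sketch (not Hitachi's actual system) of similarity
matching by color histograms, using OpenCV:

    #include <opencv/cv.h>
    #include <opencv/highgui.h>
    using namespace cv;

    // A simple color signature: a normalized 2D hue-saturation histogram.
    MatND colorSignature(const Mat& bgr)
    {
        Mat hsv;
        cvtColor(bgr, hsv, CV_BGR2HSV);
        int channels[] = {0, 1};                   // hue, saturation
        int histSize[] = {30, 32};
        float hranges[] = {0, 180}, sranges[] = {0, 256};
        const float* ranges[] = {hranges, sranges};
        MatND hist;
        calcHist(&hsv, 1, channels, Mat(), hist, 2, histSize, ranges);
        normalize(hist, hist, 1, 0, NORM_L1);     // make images comparable
        return hist;
    }

    // Higher value = more similar (histogram correlation in [-1, 1]).
    double similarity(const Mat& a, const Mat& b)
    {
        return compareHist(colorSignature(a), colorSignature(b), CV_COMP_CORREL);
    }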

It looks like a project that had been underway at Hitachi's research labs has finally been put to use and is seeing the light of day. However, since there are no demo images, there is no way to tell how well it performs. I remember being envious of the researchers I met at CHI and UIST in the past, since they seemed to be doing a lot of quite interesting work.