Showing posts with label Pattern Recognition. Show all posts

15 January 2011

My Web Browsing Turned into a Newspaper

My life is an open book.
What the machine learning people I follow on Twitter are reading.
And a custom paper.

07 September 2010

13 October 2009

Machine Learning While I Work

I am setting up Postfix, which leaves me with spare time as I try things out. This post is about the things I am reading or watching in the background.

Taskforce on Context-Aware Computing
I went to a lecture called "Open Mobile Miner (OMM): A System for Real Time Mobile Data Analysis". There is a video here, a description of OMM here and lecture slides here (pdf).

Shonali Krishnaswamy's group are making software that does some analysis of data on a smart phone before uploading it, thereby reducing the phone's power consumption by reducing communications. Their examples include ECG output, traffic congestion metrics and taxi location data. The data in their examples is scalar and sampled at 0.5 Hz or less so it is hard to see why a simple store-and-forward scheme would not achieve much the same thing. I guess I need to read their publications more deeply.
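As a toy illustration of why on-device filtering can reduce communications for such low-rate scalar data (this is my own sketch, not OMM's algorithm), here is a dead-band filter that only transmits a reading when it moves more than a threshold away from the last transmitted value:

```python
# Toy dead-band filter: transmit a scalar sample only when it differs
# from the last transmitted value by more than a threshold. All the
# numbers below are invented for illustration.

def filter_stream(samples, threshold=1.0):
    """Return the subset of (time, value) samples worth transmitting."""
    sent = []
    last = None
    for t, value in samples:
        if last is None or abs(value - last) > threshold:
            sent.append((t, value))
            last = value  # remember what the server last saw
    return sent

# A 0.5 Hz heart-rate-like stream: mostly flat, with two jumps.
readings = [(0, 72.0), (2, 72.3), (4, 72.1), (6, 80.5), (8, 80.7), (10, 72.2)]
print(filter_stream(readings))  # only the first sample and the jumps go out
```

For a near-constant signal this sends a small fraction of the samples, which is one plausible reading of how on-phone analysis could save transmission power relative to forwarding every sample.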



Statistical Learning as the Ultimate Agile Development Tool
by Peter Norvig is an overview of modern practical machine learning. The summary: focus on the data, not the code.



Learning Theory
by Mark Reid was an introduction to some theoretical aspects of machine learning presented in a summer school in Canberra in January 2009.


Now some videos of how machine learning can be applied to models of the face.

Changes of facial features along the dominance, trustworthiness and competence dimensions in a computer model developed by Oosterhof & Todorov (2008).


Now it is time to start watching a video on distributed computing.

Swarm: Distributed Computation in the Cloud from Ian Clarke on Vimeo.

27 February 2009

Replacing Sol Trujillo. Implications for Sergey Brin, Larry Page, Steve Ballmer, Egon Zehnder and Peter Norvig.

Sol Trujillo has announced his intention to leave Telstra. Contrary to what my smart phone developer friends tell me, he does not appear to have been forced out for losing a development version of Windows Mobile. Executive search company Egon Zehnder have been retained to find his replacement.

Telstra are probably paying Egon Zehnder a lot for this work. After reading about a recent study, I wonder whether they need to.

Swiss adults unfamiliar with French politics were shown 57 pairs of photos of opponents from an old French parliamentary election and asked to pick which ones looked most competent. In a separate experiment, Swiss kids ages 5 to 13 played a computer game that enacted Odysseus' trip from Troy to Ithaca. Then, using the same pairs of photos, researchers asked the kids which candidate they'd choose to captain their ship. In both experiments, the adults and children tended to pick the winners of the election.
If kids can pick winners from photos then it should be possible to train a face recognition type of algorithm to do the same thing (I will explain why in a later post). If it is possible to pick election winners then it should be no more difficult to pick successful CEOs, who share many characteristics with successful politicians.

Running a few photos through an algorithm should be a lot easier and cheaper than the Egon Zehnder process.
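As a sketch of what such an algorithm might look like (everything here is hypothetical: the feature names, numbers and labels are invented for illustration, and real systems would derive features from the face images themselves), a nearest-centroid classifier could be trained on photos of known election winners and losers:

```python
# Hypothetical nearest-centroid classifier. Each face is assumed to be
# already reduced to a feature vector; the features (brow height, jaw
# width, smile) and all values below are made up for illustration.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    """Squared Euclidean distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(face, centroids):
    """Return the label whose centroid is closest to the face vector."""
    return min(centroids, key=lambda label: dist2(face, centroids[label]))

winners = [[0.8, 0.9, 0.6], [0.7, 0.8, 0.7]]  # invented training data
losers  = [[0.2, 0.4, 0.3], [0.3, 0.3, 0.4]]
cents = {"winner": centroid(winners), "loser": centroid(losers)}
print(classify([0.75, 0.85, 0.65], cents))  # → winner
```

The same pipeline, fed photos of CEO candidates instead of politicians, is the kind of cheap screen the post is speculating about.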

While it is being used to find the next Telstra CEO, it could also answer some other big questions, like which of Sergey Brin, Larry Page and Steve Ballmer would make the best CEO.

Maybe this is why Google have hired an artificial intelligence guy like Peter Norvig.

14 February 2009

Peter Norvig's "An Exercise in Species Barcoding"!

On Charles Darwin's 200th birthday, I was wondering what percentage of those high achievers working at Google USA were among the 37% of Americans who accept that natural selection provides the best explanation of the world's species.

Peter Norvig is one of those high achieving Google employees and this week he posted a link to An Exercise in Species Barcoding to his blog. Some excerpts are below.

Recently I've been looking at the International Barcode of Life project. The idea is take DNA samples from animals and plants to help identify known species and discover new ones. While other projects strive to identify the complete genome for a few species, such as humans, dogs, red flour beetles and others, the barcoding project looks at a short 650-base sequence from a single gene. The idea is that this short sequence may not tell the whole story of an organism, but it should be enough to identify and distinguish between species. It will be successful as a barcode if (a) all (or most) members of a species have the same (or very similar) sequences and (b) members of different species have very different sequences. I was able to acquire a data set of 1248 barcode sequences, all of them Lepidoptera (butterflies and moths) from Australia. Each entry gives the name of the specimen (if known), the location it was collected, and a 659 base (i.e. ACTG) barcode.
The Big Questions
  • Can I figure a way to cluster the barcodes into species?
  • How many species are there in this data set?
  • Will there be a clear answer, or will there be many possible solutions?
  • Is the notion of a species even well-defined? That is, do the individuals cluster into groups with large-margin boundaries between them, or do they overlap?
Really we can only hope to answer this question with respect to this particular data set, but perhaps it will give us some insight into other data sets, and into the nature of species in general.

Answering the Big Questions

Now we can attempt to answer the questions.
  • Can I figure a way to cluster the barcodes into species?
    Yes. We can cluster barcodes together. We can get good agreement for about 96 or 97% of the individuals, but are uncertain of the remaining 3 or 4%.
  • How many species are there in this data set?
    I explored answers from 375 to 390, or equivalently 383±2%. There is some evidence (and some hunches) to support 384±1%, but I would hate to have to be more precise than that.
  • Will there be a clear answer, or will there be many possible solutions?
    The data does not seem to support a single answer. But asserting an answer within ±1% seems reasonable.
  • Is the notion of a species even well-defined?
Inconclusive from this data. There are 1% to 4% or so of individuals that are on the border between two species, one way or another, according to this data. But you could also say the glass is 95% full -- most individuals are conclusively clustered together, in a way that makes sense to the person doing the collecting.
  • More generally, "species" is often defined as a "group of organisms capable of interbreeding and producing fertile offspring." That's a start, but it's not a perfect definition. First of all, the majority of organisms do not even reproduce sexually. Birds do it, bees do it, most macroscopic eukaryotes do it, but bacteria and archaea do not, nor do some plants and fungi. Second, what does "capable of" mean? Historically, the Capulets and Montagues did not interbreed (nor the Sharks and Jets), but most observers would say they would be capable. But what is an observer to say about two groups of frogs that disdain each other? How do we know if they are capable of interbreeding? Third, there is the problem of transitivity of species membership. Consider the Ensatina salamander. These exist in the mountains surrounding the Central Valley in California. The mountains are laid out in a horseshoe shape, and as you traverse the horseshoe, you notice variations in the salamanders. Each variation can interbreed with its near neighbors, but the ones at the extreme western end cannot interbreed with those at the far eastern end. They can't be all one species, because they don't all interbreed, but then neighboring pairs do interbreed, so there is no clear answer as to where to draw the barriers. Biologists describe this as a ring species which is neither a single species nor a set of multiple discrete species. It seems we have to accept that species is a natural kind term which has clear prototypes -- paradigmatic cases where everyone can agree what is and isn't a species -- but does not have crisp boundaries.
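One simple way to attack the clustering question (my own sketch, not Norvig's actual method) is greedy single-linkage-style clustering of the barcodes by Hamming distance, with a cutoff that decides when two sequences belong to the same putative species:

```python
# Greedy single-linkage-style clustering of DNA barcodes by Hamming
# distance. A sequence joins the first existing cluster containing any
# member within the cutoff; otherwise it starts a new cluster. This is a
# simplification (it never merges two existing clusters).

def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def cluster(seqs, cutoff):
    clusters = []
    for s in seqs:
        home = None
        for c in clusters:
            if any(hamming(s, t) < cutoff for t in c):
                home = c
                break
        if home is not None:
            home.append(s)
        else:
            clusters.append([s])
    return clusters

# Toy 12-base "barcodes" (real ones are 659 bases): two tight groups.
barcodes = ["ACGTACGTACGT", "ACGTACGTACGA", "ACGTACGAACGT",
            "TTTTCCCCGGGG", "TTTTCCCCGGGA"]
print(len(cluster(barcodes, cutoff=3)))  # → 2 clusters
```

Norvig's observation that the answer changes with the assumptions corresponds here to the cutoff: move it and the borderline 3-4% of individuals change clusters, which is exactly why the species count only comes out to within a percent or two.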

08 February 2009

Face Recognition for Android?

A lot of smart people end up at Google. Even Hartmut Neven is there. Does that mean that Android will have face recognition, general object recognition and gaze tracking from the phone's camera? It sounds straightforward: upload the picture to the Google computing cloud, analyze it and download the results. No need to run the image recognition locally on the phone the way Neven used to.

What is the face recognition API called? All I can find is the face detector API.

What Other People Are Saying
Did Google Pull a Neven with Enkin? speculates that Google are going to use image recognition as a way for mobile camera phones to interact with the world, in particular via machine-readable codes on printed pages. I surveyed machine-readable codes for printed pages in a previous post.

This speculation is interesting because I always thought the killer app for mobiles would be image recognition + machine-readable codes on printed pages + OCR + speech recognition + location awareness, along with an engine to deduce useful information from the data. Maybe it's a more obvious idea than I thought. Don't throw out your copy of Duda and Hart!

04 February 2009

Voice Search on Android (and Location Enabled Search)

The Google Mobile Blog just announced the most useful mobile feature I have heard of for a while. Here is a long quote from Jeff Hamilton's post:

You can start searching by voice with just the touch of a button. On the home screen search widget, look for the microphone button right next to the search box and the search button. Press that button, wait for the "Speak now" prompt, and then say your query. You'll soon see search results formatted for the Android browser.
Also, whenever you're in the Android browser, just press the "Menu" button and tap "Search". You'll see the same microphone button there too.
This makes doing successive voice-triggered searches -- and mobile web surfing -- easy and fast. Try speaking your favorite web sites, then tapping on the top search results to get to them.
Note that you can use the "Voice Dialer" app, which you can find on the main app menu, to search for your contacts with your voice to make a call. Or, simply long-press the green call button and follow the prompts on the splash screen.
This sounds really useful and it should drive mobile phone usage forward, especially when combined with location enabled search.
Latitude is a new feature of Google Maps for mobile, as well as an iGoogle gadget, that allows you to share your location with your friends and to see their approximate locations, if they choose to share them with you. You can use your Google account to sign in and easily invite friends to Latitude  from your existing list of contacts or by entering their email addresses. Google Talk is integrated with Latitude, so you and your friends can update your status messages and profile photos on the go and see what everyone is up to. You can also call, SMS, IM, or email each other within the app.
We've gone to great lengths to put this on as many smartphone devices as possible from day one so that most of the people you know will be able to use Latitude right away. There are two primary ways to use Latitude right now:
On your mobile phone: visit google.com/latitude from your phone's mobile browser to download Google Maps for mobile with Latitude. We currently support most of the popular smartphone platforms: Android, Blackberry, Symbian S60, and Windows Mobile, and we are hoping to see Latitude on the iPhone soon. It will be available through Google Mobile App, and you'll just need to download or update the app from the App store to find Latitude in the Apps tab.
On your computer: go to http://google.com/latitude from your browser and add the Latitude gadget to your iGoogle homepage. What's neat is that if you've installed Google Gears or if you're using Google Chrome, you can choose to automatically share your location from your laptop or desktop computer -- no smartphone required!
See also
Speech Recognition Providers

26 January 2009

Pattern Recognition and Smart Metering

This Electronics Weekly article says:

University of Oxford spin out Intelligent Sustainable Energy (ISE) is to make electricity meters so smart that they can recognise which domestic appliance is operating at any time...Instead, the firm's intellectual property covers algorithms that interpret mains voltage and current waveforms. "It is artificial intelligence - pattern recognition," said Donaldson, who would not go into any more detail.
This implies that each device in a house has a characteristic pattern of electricity usage that ISE's algorithms can identify. The ISE webpage  says
The core of the ISE solution is the intelligent energy monitor which connects to the energy supply at a single point. Using patented artificial intelligence and signal processing techniques developed over a number of years at Oxford University, the system analyses the electricity supply and calculates the power consumption of each appliance without the need for any other sensors. This information can then be communicated to the consumer through a variety of methods including local in-home displays, web portals and itemised billing services.
There is some more information in this story.

Intelligent Appliances
MIT's Project Oxygen has the notion of Intelligent Spaces
Space-centered computation embedded in ordinary environments defines intelligent spaces populated by cameras, microphones, displays, sound output systems, radar systems, wireless networks, and controls for physical entities such as curtains, lighting, door locks, soda dispensers, toll gates, and automobiles. People interact in intelligent spaces naturally, using speech, gesture, drawing, and movement, without necessarily being aware that computation is present.
Environmental devices, together called an E21, provide a local-area computational and communication back-plane for an intelligent space. E21s are connected to nearby sensors, actuators, and appliances, suitably encapsulated in physical objects. They communicate with each other and with nearby handheld devices (H21s) through dynamically configured networks (N21s). E21s provide sufficient computational power throughout the environment
Related Papers
Current Sensor Based Non-Intrusive Appliance Recognition (copy protected PDF) by Saito et al. explains one way of finding the current usage patterns of electrical devices. They use Nearest Neighbor classification.

Exploration on Load Signatures

The paper Estimation of Variable-Speed-Drive Power Consumption From Harmonic Content proposes a VSD power estimation method based on observed correlations between fundamental and higher harmonic spectral content in current. The technique can be generalized to any load with signature correlations in harmonic content, including many power electronic and electromechanical loads.
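A minimal sketch of the nearest-neighbour idea behind these papers (the appliance labels and harmonic magnitudes below are invented, not measured data): classify an unknown load by comparing a feature vector of its current-harmonic magnitudes against labelled examples.

```python
# 1-nearest-neighbour appliance recognition from harmonic content.
# Feature vectors are [h1, h3, h5]: invented magnitudes of the 1st, 3rd
# and 5th current harmonics for each appliance.

def dist2(a, b):
    """Squared Euclidean distance between feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbour(sample, labelled):
    """Return the label of the closest labelled feature vector."""
    label, _ = min(labelled, key=lambda lf: dist2(sample, lf[1]))
    return label

training = [
    ("kettle", [10.0, 0.5, 0.2]),  # resistive: nearly pure fundamental
    ("laptop", [1.0, 0.8, 0.6]),   # switch-mode supply: rich harmonics
    ("fridge", [3.0, 0.4, 0.1]),   # induction motor
]
print(nearest_neighbour([9.5, 0.6, 0.3], training))  # → kettle
```

This is roughly the shape of the Saito et al. nearest-neighbour approach; a real system would extract the features from sampled voltage and current waveforms and handle multiple appliances running at once.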

Update,  21 October 2009
I am putting together some proposals for products based on HANs and pattern recognition. If anyone has similar interests please contact me on peter.williams.97@gmail.com to discuss collaboration opportunities.

03 September 2008

Google Picasa to have Face Recognition

Here is a round-up of the news articles from this morning. Look at the blog list at the bottom-right for more.

Cnet: "Revamped Google Picasa site identifies photo faces" The "name tag" feature presents users with collections of photos with what it judges to be the same person, then lets them click a button to affix a name. Once photographic subjects are named, users can browse an album of that individual on the fly.

The name tag feature groups like faces together to let users tag them with names a batch at a time.

Techcrunch "Picasa Refresh Brings Facial Recognition" The facial recognition technology comes to Picasa thanks to an acquisition Google made in 2006 of Neven Vision, a company that specialized in matching facial detail with images already found in a centralized database. Picasa's facial recognition technology works in much the same way.

Web Pro News "Googles picasa takes on facial recognition"

Analysis
It is interesting that Google chose Neven over companies such as

Idée's TinEye
Imprezzeo
Polar Rose
Riya , and
ilooklikeyou.com

Neven Vision were the creators of the NV1-norm algorithm that did so well in the NIST Face Recognition Vendor Test.


According to this article, Neven have a good patent portfolio in image search. Hartmut Neven was an assistant professor of computer science at the University of Southern California, at the Laboratory for Biological and Computational Vision. Later he returned as the head of the Laboratory for Human-Machine Interfaces at USC's Information Sciences Institute. Neven co-founded two companies: Eyematic, for which he served as CTO, and Neven Vision, which he initially led as CEO. At Eyematic he developed real-time facial feature analysis for avatar animation. Neven Vision pioneered mobile visual search for camera phones and was acquired by Google in 2006. Today he manages a team responsible for advancing Google's object and face recognition technologies. I wonder if that means Neven is supervising all the SIFT work for Visual Rank.

Detailed List of Neven patents.
The key face recognition patent in this list appears to be US Patent 6,222,939, granted April 24, 2001, filed June 25, 1997.
Abstract: A process for image analysis which includes selecting a number M of images, forming a model graph from each of the number of images, such that each model has a number N of nodes, assembling the model graphs into a gallery, and mapping the gallery of model graphs into an associated bunch graph by using average distance vectors Δ_ij for the model graphs as edge vectors in the associated bunch graph. A number M of jets is associated with each node of the associated bunch graph, and at least one jet is labeled with an attribute characteristic of one of the number of images. An elastic graph matching procedure is performed wherein the graph similarity function is replaced by a bunch-similarity function.
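A highly simplified sketch of the bunch-similarity idea in the abstract (real jets are vectors of Gabor wavelet responses at a facial landmark; the tiny vectors here are invented): for each node, compare the probe's jet against every gallery jet in that node's bunch, keep the best match, and average over nodes.

```python
# Toy bunch-similarity: per node, the probe jet is scored against each
# gallery jet in the bunch; the best per-node score is kept and the
# scores are averaged. Jet values are invented 2-D stand-ins.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

def bunch_similarity(probe_jets, bunch):
    """bunch[node] is the list of gallery jets ('the bunch') for that node."""
    best = [max(cosine(jet, g) for g in gallery)
            for jet, gallery in zip(probe_jets, bunch)]
    return sum(best) / len(best)

probe = [[1.0, 0.0], [0.5, 0.5]]        # one jet per node
bunch = [[[1.0, 0.1], [0.0, 1.0]],       # node 0: two gallery jets
         [[0.5, 0.4], [0.9, 0.1]]]       # node 1: two gallery jets
print(round(bunch_similarity(probe, bunch), 3))
```

Taking the best match per node is what lets one bunch graph cover variation (bearded/clean-shaven, open/closed eyes) across the gallery faces, rather than matching against any single model graph.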
 

Premium No Name Brands

Try googling "premium no name brand". The list of websites is small and probably includes this one.

With good analytics such as pattern recognition, near infrared spectrometry, liquid chromatography, bluetooth and other technologies, plus good supply chain management, it should be possible for retailers to guarantee the quality levels associated with top brands without paying brand premiums to the brand owners.


Bottled water is the opposite of this idea.

02 September 2008

GWAP

GWAP stands for “Games With A Purpose” and it all started under the guidance of Luis von Ahn, an assistant professor in the Computer Science Department at Carnegie Mellon University...The idea is to bring human brainpower into play with the learning ability of computers. Think of it as a transition between human understanding and computational thinking.

The GWAP website is here and it seems like fun.

27 July 2008

Dynamic Advertisement Provisioning for Digital Signage

Detecting advertisement viewers' demographics by analyzing their faces. Recent articles from Random Ramblings.. and NY Times discuss how two companies, TruMedia and Quividi, have systems that measure things about people who look at billboards, such as age, gender and demeanor, and change the ads being displayed based on this. The two companies do this by installing cameras on the billboards and using face recognition techniques to provide the age, gender and other information.

Examples of these and other image recognition based audience measurement systems are 


Dynamic Advertisement Provisioning
The Digital Signage Association calls the above technology Dynamic Advertisement Provisioning. Here is a quote from the association: “Dynamic ad provisioning from facial recognition suggests an entirely new revenue model from better message targeting,” Bunn said. “In this revenue model, content is developed for locations where targeted viewers are expected. The content is placed in storage on the media player at that location for playout when triggered (rather than simply placing the ad into a playloop).”

The full Bunn article referred to in the previous paragraph is here.

Related Articles
  • Digital Signage ROI
  • Interactive Displays: Harrahs, Gestures, Kiosks
  • The Last Mile Of Retail: The Shelf, The Final Supply Chain Frontier says: A recent report from the In-Store Implementation Sharegroup, titled “In-Store Implementation: Current Status and Future Solutions,” highlights the problems and opportunities that exist in shelf-level supply chain collaboration. Most alarmingly, the report estimates that suboptimal performance of in-store category management, shelf management, promotion and shopper marketing annually costs the retail industry 1 percent of gross product sales, or about $10 billion to $15 billion per year. See also Oracle Retail

23 July 2008

Humor as Pattern Recognition or Possibly Male Aggression

http://www.scienceagogo.com/news/20080527183142data_trunc_sys.shtml posits that humor is an evolved type of pattern recognition.

http://www.world-science.net/othernews/071221_humor.htm says it is a form of male aggression.

It also masks false advertising.

It looks like I need to do some more reading ...

Taking the Work out of Searching for a Mate

http://www.eyealike.com/news.php?spgId=pr_06262008

Press Release - 26 June 2008
Eyealike Delivers "My Type" Attraction Trait Levers to Bring Fresh Sparks to Online Dating Sites.
Combination of facial and physical attribute recognition allows dating websites to offer new ways for singles to factor in attraction to find more relevant matches