1) Pictures from the Net – Wednesday

a) From the Guardian’s “Satellite eye on Earth: September 2010” slide show

http://www.guardian.co.uk/environment/gallery/2010/oct/06/satellite-eye-on-earth-september-2010

The eye of Hurricane Igor, a category four hurricane, on 14 September 2010. At the time this image was taken, Igor was centered in the Atlantic Ocean, moving slowly west-northwest with maximum sustained winds of 213 kph

 

The Lake Eyre basin, one of the world’s largest internally draining systems, in the heart of Australia. White cloud streaks stand in contrast to the vast amounts of crimson soil and sparse greenery of the “red centre”. The basin covers about 1.2 million sq km (about the size of France, Germany and Italy combined), including large portions of South Australia (bottom), the Northern Territory (upper left) and Queensland (upper right) and a part of western New South Wales (bottom right)

b) From the Guardian’s “24 Hours In Pictures”

http://www.guardian.co.uk/news/gallery/2010/oct/06/24-hours-in-pictures-gallery

 

Paris, France: A model wears an Antonio Marras outfit during the Italian fashion designer’s spring-summer 2011 show

 

Hungary: One of the emergency workers cleaning up villages flooded with toxic sludge from the Ajka aluminium works

I am not sure which of the above outfits looks creepier.

2) Aiming to Learn as We Do, a Machine Teaches Itself.

The day of a machine with true artificial intelligence may be one step closer, thanks to a Carnegie Mellon University research project: NELL, the Never-Ending Language Learning system.

Questions 

1. Do you believe that a machine will ever be built that can match the creative intelligence of man?

2. If an AI is built, what will its first thoughts about mankind be? Warm and fuzzy or Terminatorish?

I don’t believe we will ever build a machine with the ability to match our creative, abstract thinking process. One reason is that our own brains are evolving. The brain of a Homo sapiens 1,000 years from today will have neuron connections that work more efficiently, and we will use a much greater percentage of our potential brain power than we do today. Of course that may not be saying much. 🙂

If a machine is built that can match our own thinking ability, it will likely have the intelligence to understand that cooperation is the best way to survive. Humans & AIs will co-exist in harmony. Of course, I am talking about future man. We have not yet learned how to live in harmony, and still far too often choose conflict.

From a New York Times article about NELL, by Steve Lohr:

http://www.nytimes.com/2010/10/05/science/05compute.html?hpw=&pagewanted=print

“Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.

The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.”
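Just to make the “knowledge base” idea a little more concrete, here is a rough Python sketch of my own showing how category facts and relation facts of the kind described above could be stored. The names and data are illustrative only; this is not NELL’s actual code or data.

```python
# A toy "knowledge base" in the spirit of NELL: category facts
# ("San Francisco is a city") plus relation facts between members
# of two categories ("Peyton Manning plays for the Indianapolis Colts").

category_facts = {
    "city": {"San Francisco"},
    "plant": {"sunflower"},
    "athlete": {"Peyton Manning"},
    "sports team": {"Indianapolis Colts"},
}

relation_facts = {
    "plays for": {("Peyton Manning", "Indianapolis Colts")},
}

def add_category_fact(noun: str, category: str) -> None:
    """Record that a noun phrase belongs to a semantic category."""
    category_facts.setdefault(category, set()).add(noun)

def add_relation_fact(relation: str, subject: str, obj: str) -> None:
    """Record a relation between members of two categories."""
    relation_facts.setdefault(relation, set()).add((subject, obj))

# Facts extracted from the web keep getting folded into the same store,
# so the knowledge base grows continuously.
add_category_fact("Pittsburgh", "city")
print(category_facts["city"])       # {'San Francisco', 'Pittsburgh'}
print(relation_facts["plays for"])  # {('Peyton Manning', 'Indianapolis Colts')}
```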

“With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.

Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”

NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.

For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.

NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”

A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”

“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”
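And here is an equally rough sketch of the pattern-based learning and self-correction described above: a phrase like “I climbed XXX” nudges the system toward the “mountain” category, but that evidence is weighed against what it already believes, so “stairs” (already known with confidence to be a building part) does not get mislabelled. Again, this is my own illustrative Python, not how NELL is actually implemented.

```python
from collections import defaultdict

# Things the system already believes with high confidence.
known_categories = {
    "Pikes Peak": None,          # not yet classified
    "stairs": "building part",   # learned earlier with great certainty
}

# Text patterns and the category they weakly suggest.
pattern_hints = {
    "I climbed {}": "mountain",
}

# Accumulated evidence: candidate category scores per noun phrase.
evidence = defaultdict(lambda: defaultdict(float))

def observe(pattern: str, noun: str, weight: float = 1.0) -> None:
    """Register one occurrence of a noun phrase inside a known pattern."""
    hinted = pattern_hints.get(pattern)
    if hinted is None:
        return
    # Self-correction: if the noun already has a confident, conflicting
    # category, the pattern evidence is discounted rather than trusted.
    if known_categories.get(noun) not in (None, hinted):
        weight *= 0.1
    evidence[noun][hinted] += weight

def best_guess(noun: str, threshold: float = 2.0):
    """Promote a category to 'known' once enough evidence accumulates."""
    if known_categories.get(noun):
        return known_categories[noun]
    scores = evidence[noun]
    if scores:
        category, score = max(scores.items(), key=lambda kv: kv[1])
        if score >= threshold:
            known_categories[noun] = category
            return category
    return None

# Repeated sightings of "I climbed Pikes Peak" push it toward "mountain"...
for _ in range(3):
    observe("I climbed {}", "Pikes Peak")
# ...while "I climbed stairs" barely moves the needle, because "stairs"
# is already firmly a building part.
observe("I climbed {}", "stairs")

print(best_guess("Pikes Peak"))  # mountain
print(best_guess("stairs"))      # building part
```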
