Tuesday, October 14, 2014

Baby Talk and AI

I have been fascinated by AI recently and can't stop thinking about how humans can see and understand objects and language.

I quote the following from the Internet:

Baby Talk Milestones
  • Baby talk at 3 months. At 3 months, your baby listens to your voice, watches your face as you talk, and turns toward other voices, sounds, and music that can be heard around the home. Many infants prefer a woman's voice over a man's. Many also prefer voices and music they heard while they were still in the womb. By the end of three months, babies begin "cooing" -- a happy, gentle, repetitive, sing-song vocalization.
  • Baby talk at 6 months. At 6 months, your baby begins babbling with different sounds. For example, your baby may say "ba-ba" or "da-da." By the end of the sixth or seventh month, babies respond to their own names, recognize their native language, and use their tone of voice to tell you they're happy or upset. Some eager parents interpret a string of "da-da" babbles as their baby's first words -- "daddy!" But babbling at this age is usually still made up of random syllables without real meaning or comprehension.
  • Baby talk at 9 months. After 9 months, babies can understand a few basic words like "no" and "bye-bye." They also may begin to use a wider range of consonant sounds and tones of voice.
  • Baby talk at 12 months. Most babies say a few simple words like "mama" and "dadda" by the end of 12 months -- and now know what they're saying. They respond to -- or at least understand, if not obey -- your short, one-step requests such as, "Please put that down."
  • Baby talk at 18 months. Babies at this age say up to 10 simple words and can point to people, objects, and body parts you name for them. They repeat words or sounds they hear you say, like the last word in a sentence. But they often leave off endings or beginnings of words. For example, they may say "daw" for "dog" or "noo-noo's" for "noodles."
  • Baby talk at 2 years. By age 2, babies string together a few words in short phrases of two to four words, such as "Mommy bye-bye" or "me milk." They're learning that words mean more than objects like "cup" -- they also mean abstract ideas like "mine."
  • Baby talk at 3 years. By the time your baby is age 3, his or her vocabulary expands rapidly, and "make-believe" play spurs an understanding of symbolic and abstract language like "now," feelings like "sad," and spatial concepts like "in."
Below are my own thoughts on AI:
  • We may have an understanding and a simulation of how brains work (deep neural networks), but we are training them in an imperfect way. Basically, it is as if we handed a baby a heavy book of images with strange bounding boxes and labels and expected him or her to understand the world.
  • One immediate approach may be to add more profound information such as depth, shape, and abstract concepts from mathematics (which are believed to be derivable from a set of rules), and to teach the computer the way we teach a baby, combined with the way he or she perceives the world (babies have two eyes, so they always take in stereo pairs).
  • Feature representations are definitely important, although I believe being purely data-driven is not that important. Imagine a TREE. I have seen many trees before, but not all the trees in the world. I also can't exactly recall any particular tree I have ever seen. All I have is the concept, which is a combination of texture, shape, etc. If you sketch a tree, I will probably be able to recognize it once you have drawn the trunk and some leaves (very few strokes are sufficient). You could also show me a very small fraction of a tree, say the trunk and its texture, and I could probably tell it is part of a tree. So I believe we should take all of these features into account.
  • Structure is not that important. Even if a head is below a body, or a tree is in the sky, I can still tell it is a head or a tree. Likewise, in natural language processing, I doubt anyone would teach a baby to speak by starting with a set of grammar rules and definitions of nouns and verbs.
  • There is still a long way to go before anyone can claim real AI. ImageNet is definitely a good start, but it is just nouns from a dictionary, and I believe we should combine other abstract concepts such as shape and spatial position, and teach computers in some such way. An image carries more information than a set of bounding boxes and labels. One difference might be that computers just passively take in knowledge, while human beings can see, listen, eat, walk, explore, and feel. We should definitely give computers some feedback as incentives (see the toy reinforcement-learning sketch after this list). I believe we have most of the tools we need (DNNs, reinforcement learning, feature representations...), and it is time to combine all of them to push AI to a new frontier.
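
To make the "feedback as incentives" point concrete, here is a minimal sketch of reinforcement learning in the form of tabular Q-learning. The environment (a five-state corridor with a reward at the far end) and all parameter values are my own illustrative assumptions, chosen only to show sparse feedback driving learning:

#include <algorithm>
#include <cstdlib>
#include <iostream>

int main() {
    const int N_STATES = 5;      // toy corridor: states 0..4, goal is state 4
    const double alpha = 0.1;    // learning rate
    const double gamma = 0.9;    // discount factor
    const double epsilon = 0.1;  // exploration probability
    double Q[N_STATES][2] = {};  // Q[state][action]; action 0 = left, 1 = right

    std::srand(0);
    for (int episode = 0; episode < 500; ++episode) {
        int s = 0;
        while (s != N_STATES - 1) {
            // epsilon-greedy: mostly exploit current estimates, occasionally explore
            int a = (std::rand() / (double)RAND_MAX < epsilon)
                        ? std::rand() % 2
                        : (Q[s][1] > Q[s][0] ? 1 : 0);

            // environment step: the only feedback is a reward of 1 at the goal
            int s2 = (a == 1) ? s + 1 : std::max(s - 1, 0);
            double r = (s2 == N_STATES - 1) ? 1.0 : 0.0;

            // Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * std::max(Q[s2][0], Q[s2][1]) - Q[s][a]);
            s = s2;
        }
    }

    // the learned greedy policy should be "right" in every state
    for (int s = 0; s + 1 < N_STATES; ++s)
        std::cout << "state " << s << ": "
                  << (Q[s][1] > Q[s][0] ? "right" : "left") << std::endl;
    return 0;
}

The point of the sketch is only that a sparse reward signal, plus exploration, is enough for the agent to learn behavior that was never explicitly labeled.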

Thursday, October 9, 2014

The Future of Deep Learning

Quote from Yann LeCun:

What areas do you think are most promising right now for people who are just starting out?
  • representation learning (the current crop of deep learning methods is just one way of doing it)
  • learning long-term dependencies
  • marrying representation learning with structured prediction and/or reasoning
  • unsupervised representation learning, particularly prediction-based methods for temporal/sequential signals
  • marrying representation learning and reinforcement learning
  • using learning to speed up the solution of complex inference problems
  • theory: do theory (any theory) on deep learning/representation learning
  • understanding the landscape of objective functions in deep learning
  • in terms of applications: natural language understanding (e.g. for machine translation), video understanding
  • learning complex control.

Wednesday, October 1, 2014

Boost File and Folder Operations

#include"boost/filesystem.hpp"

using namespace boost::filesystem;

remove_all("train"); //remove folder

create_directory("train"); //create folder

exists(image_name); //check file exist

copy_file(image_name,out_image_name); //copy file

Link against the library (Boost.Filesystem also depends on Boost.System):

/usr/lib/x86_64-linux-gnu/libboost_filesystem.so
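
Putting it together, a minimal complete program might look like this (the image paths are hypothetical placeholders):

#include <boost/filesystem.hpp>
#include <iostream>

using namespace boost::filesystem;

int main() {
    // hypothetical paths, just for illustration
    path image_name = "images/cat.jpg";
    path out_image_name = "train/cat.jpg";

    remove_all("train");        // wipe the folder and everything inside it
    create_directory("train");  // recreate it empty

    if (exists(image_name))     // copy only if the source file is there
        copy_file(image_name, out_image_name);

    std::cout << "done" << std::endl;
    return 0;
}

Compile and link with, for example:

g++ demo.cpp -o demo -lboost_filesystem -lboost_system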