
Monday, 28 March 2016

Microsoft's AI Bot


Microsoft’s Artificial Intelligence Chat Bot


Microsoft has developed a new artificial intelligence chat bot that claims to get smarter the more one talks to it. The bot, called Tay, was built by Microsoft's Technology and Research division together with the Bing team to conduct research on conversational understanding. The Bing team had developed a related conversational bot, XiaoIce, for the Chinese market in 2014.

Microsoft executives had dubbed XiaoIce `Cortana's little sister', after the Redmond, Washington company's voice-activated Cortana personal assistant software. The real-world focus of the bot is to let researchers experiment with, and learn from, how people talk to each other. Microsoft states that the bot, which is available through Twitter as well as the messaging platforms Kik and GroupMe, plays the role of a millennial, includes emojis in its vocabulary, and is clearly aimed at 18-24 year olds.

The bot has little practical function for users, though it offers three different methods of communication: its website, tay.ai, boasts that the AI can talk through text, play games such as guessing the meaning of a string of emojis, and comment on photos sent to it.

Tay Designed to Engage & Entertain People


At the time of writing, the bot had accumulated around 3,500 followers on Twitter but had sent over 14,000 messages, responding to questions, statements and general abuse within a matter of seconds. The about section of Tay's website stated that `Tay is designed to engage and entertain people where they connect with each other online via casual and playful conversation'.

Tay works from public data along with editorial input created by staff, including comedians. Microsoft has explained that `public data which has been anonymised is the primary data source of Tay, and that data has been modelled, cleaned and filtered by the team creating Tay'. Beyond the meme-tastic appeal of the bot, there is a serious side to the research behind the AI: making machines capable of communicating in a natural, human way is a central challenge for machine learning.

Effort of Service to Comprehend How Humans Speak


Google too had recently updated its Inbox mail service to recommend answers to emails; its smart reply feature offers three probable responses suggested by Google's AI. As with Tay, Google says that the more one uses smart replies, the better they get. If a user chooses to share them with Tay, the bot can track the user's nickname, gender, zip code, favourite food and relationship status.

Users can delete their profiles by submitting a request through the Tay.ai contact form. In the field of virtual assistants and chat bots, Facebook's M is also experimenting with the use of artificial intelligence to complete tasks. Though it is partly controlled by humans, the systems are currently being trained to book restaurants and respond to some questions. The core of the service is an effort to understand how humans speak and the best way to respond to them.

Sunday, 10 January 2016

A Learning Advance in Artificial Intelligence Rivals Human Abilities


Artificial Intelligence Surpasses Human Competences

Computer researchers recently reported that artificial intelligence had surpassed human competence on a narrow set of vision-related tasks. These developments are notable because so-called machine-vision methods are becoming commonplace in many areas of life, including car-safety systems that identify pedestrians and bicyclists, video game controls, Internet search and factory robots.

Researchers from the Massachusetts Institute of Technology, New York University and the University of Toronto recently reported a new kind of `one shot' machine learning in the journal Science, in which a computer vision program beat a group of humans at identifying handwritten characters based on a single example. The program is able to learn characters quickly in a variety of languages and to generalize from what it has learned.

The authors suggest that this ability resembles the way humans learn and understand concepts. The new approach, known as Bayesian Program Learning, or B.P.L., is unlike current machine learning technologies known as deep neural networks. Neural networks can be trained to recognize human speech, identify objects in images or detect types of behaviour, but only after being exposed to large sets of examples.
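To see what makes one-shot learning distinctive, here is a toy sketch of classifying a handwritten character from a single stored example per class. The tiny binary grids and nearest-neighbour rule are illustrative assumptions only, not the actual B.P.L. model, which instead composes characters probabilistically from pen strokes.

```python
# Toy one-shot classification (NOT the actual B.P.L. model): each class is
# represented by a single example "character" (a tiny binary grid), and a
# query is labelled by whichever stored example it differs from the least.

def distance(a, b):
    """Count the pixels where two binary grids differ (Hamming distance)."""
    return sum(x != y for x, y in zip(a, b))

def one_shot_classify(examples, query):
    """examples: {label: grid}, one grid per label; return the nearest label."""
    return min(examples, key=lambda label: distance(examples[label], query))

# One 3x3 example per "character", flattened to 9 pixels.
examples = {
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
    "L": [1, 0, 0,
          1, 0, 0,
          1, 1, 1],
}

# A noisy query: a "T" with one pixel flipped.
query = [1, 1, 1,
         0, 1, 0,
         0, 0, 0]

print(one_shot_classify(examples, query))  # -> T
```

The point of the contrast is that a deep neural network would typically need many labelled examples per character to reach this behaviour, whereas a one-shot learner must generalize from a single instance.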

Bayesian Approach

Though these networks are modelled loosely on the behaviour of biological neurons, they have not yet learned the way humans do, by quickly acquiring new concepts. By contrast, the new software program described in the Science article can recognize handwritten characters after `seeing' only one or a few examples.

The researchers compared the capabilities of their Bayesian approach and other programming models on five separate learning tasks involving a set of characters from a research dataset known as Omniglot, which comprises 1,623 handwritten characters from 50 writing systems.

Both the images and the pen strokes needed to create the characters were captured. Joshua B. Tenenbaum, professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper, commented that `with all the progress in machine learning, it is amazing what one can do with lots of data and faster computers. But when one looks at children, it is amazing what they can learn from very little data; some comes from prior knowledge and some is built into the brain'.

Imagenet Large Scale Visual Recognition Challenge

Moreover, the organizers of an annual academic machine-vision competition also reported gains in lowering the error rate of software for locating and classifying objects in digital images. Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill, said that he was amazed by the rate of progress in the field.

The competition, known as the Imagenet Large Scale Visual Recognition Challenge, pits teams of researchers at government, academic and corporate laboratories against one another to design programs that classify and detect objects. This year it was won by a group of researchers at the Microsoft Research laboratory in Beijing.

Tuesday, 22 December 2015

Facebook’s Artificial-Intelligence Software Gets a Dash More Common Sense


Artificial-Intelligence Researchers – Teaching Computers Basic Physical Common Sense


Artificial intelligence researchers have undertaken a project to discover how computers could learn some basic physical common sense. Understanding, for instance, that unsupported objects fall, or that a large object does not fit inside a smaller one, is a key part of how humans predict, communicate and reason about the world.

Mike Schroepfer, chief technology officer of Facebook, states that if machines are to be more useful, they will need the same kind of common-sense understanding. At a recent preview of results he would share at the Web Summit in Dublin, Ireland, he said that they have got to teach computer systems to understand the world the way humans do. Human beings learn the basic physics of reality at a young age, by observing the world.

Facebook drew on its image-processing software to create a technique that learned to predict whether a stack of virtual blocks would tumble. The software learns from images of virtual stacks, or at times two stereo images, like those formed by a pair of eyes.

Crafting Software – Comprehend Images/Language of Deep Learning


During the learning phase it was shown many different stacks, some of which toppled while others did not. The simulation showed the learning software the outcome of each, and after enough examples it could predict for itself, with 90% accuracy, whether a given stack would tumble. Schroepfer comments that if one ran it through a series of tests, it would beat most people.
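The learning setup described above can be sketched in miniature. The sketch below is a hypothetical toy, not Facebook's actual system: stacks are reduced to lists of horizontal block offsets (rather than images), the "topples" label comes from a deliberately simplified physics rule, and the learner is the simplest possible one, a threshold fitted to the labelled examples.

```python
# Minimal toy of learning physical intuition from labelled examples:
# generate random virtual block stacks, label each one with a simplified
# physics rule, then fit a threshold on the largest block overhang.

import random

random.seed(0)

def make_stack(n_blocks=4):
    """A stack is a list of horizontal offsets of each block (all width 1.0)."""
    return [random.uniform(-0.8, 0.8) for _ in range(n_blocks)]

def max_overhang(stack):
    """Largest horizontal overhang between neighbouring blocks."""
    return max(abs(stack[i + 1] - stack[i]) for i in range(len(stack) - 1))

def topples(stack):
    """Simplified ground-truth rule: topples if any overhang exceeds 0.5."""
    return max_overhang(stack) > 0.5

def fit_threshold(stacks, labels):
    """Learn the overhang threshold that best reproduces the labels."""
    candidates = [i / 100 for i in range(160)]
    return max(candidates,
               key=lambda t: sum((max_overhang(s) > t) == y
                                 for s, y in zip(stacks, labels)))

train = [make_stack() for _ in range(500)]
threshold = fit_threshold(train, [topples(s) for s in train])

test = [make_stack() for _ in range(200)]
accuracy = sum((max_overhang(s) > threshold) == topples(s) for s in test) / 200
print(f"learned threshold {threshold:.2f}, held-out accuracy {accuracy:.0%}")
```

Facebook's system works from rendered images with a deep network rather than hand-picked features, but the overall loop is the same: show many labelled outcomes, then measure predictive accuracy on stacks the learner has never seen.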

The research was done by Facebook's artificial intelligence research group in New York, which concentrates on crafting software that can understand images and language using a technique called deep learning. Recently the group also showed off a mobile app capable of answering queries about the content of photos.

The director of the group, Yann LeCun, who is also a professor at NYU, told MIT Technology Review that the system for predicting when blocks would topple indicates that more complex physical simulations could be used to teach additional basic principles of physical common sense. He added that `it serves to create a baseline if we train the systems unsupervised, and it would have adequate power to figure things out like that'.

Memory Network


His group had earlier created a system known as a `memory network', which could pick up some basic common sense and verbal reasoning abilities by reading simple stories, and it has now progressed to helping power a virtual assistant that Facebook is testing, known as M. M has more potential than Apple's Siri or similar apps because it is backed by a bank of human operators.
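The story-reading task a memory network is trained on can be illustrated with a drastically simplified stand-in: store each story sentence as a "memory", then answer a question by retrieving the memory that shares the most words with it. The story, questions and bag-of-words matching here are illustrative assumptions; the real memory network learns its retrieval with neural embeddings rather than word overlap.

```python
# Toy bag-of-words question answering over stored "memories" -- a crude
# stand-in for a memory network, which learns this retrieval step instead.

def tokens(sentence):
    """Lower-cased word set, with trailing punctuation stripped."""
    return set(sentence.lower().rstrip(".?").split())

def answer(memories, question):
    """Retrieve the memory sharing the most words with the question,
    then return its last word as the answer."""
    best = max(memories, key=lambda m: len(tokens(m) & tokens(question)))
    return best.rstrip(".").split()[-1]

story = [
    "Mary went to the kitchen.",
    "John picked up the football.",
    "Mary travelled to the garden.",
]

print(answer(story, "What did John pick up?"))  # -> football
```

Simple as it is, this retrieve-then-answer shape is the structure the memory network refines: it scores every stored memory against the question and conditions its answer on the best matches.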

However, Facebook expects that the operators will steadily become less important as the software learns to field queries for itself. Schroepfer says that adding the memory network to M is showing how that could happen: by observing the interactions between people using M and the responses of customer-service operators, it has already learned how to handle some common queries.

Facebook has not committed to turning M into a widely available product; however, Schroepfer states that the results indicate how that could be possible. He adds that the system figured this out by observing humans, and that while they cannot afford to hire operators for the entire world, with the right AI system they could offer that to the whole planet.