
Monday, November 30, 2009

Computer Finally Emulates the Human Brain

Scientists at the Luleå University of Technology have announced the development of a new computer model capable of mimicking a pair of human brain functions in a digital environment. A number of potentially groundbreaking applications spring to mind, including machines that are able to analyze their own behavior, find their shortcomings, and then fix them. Another possibility would be machines that are able to suppress the impact of noise in electrical circuits. Each of these innovations could revolutionize a large number of research fields.

“We have developed a model of how various sources of information that complement each other can give a better picture of what is happening, better in the sense that more can be seen than what the individual parts show. We have a model that, in important respects, exhibits the same behavior that researchers investigating the nervous system measure,” explains LUT researcher Tamas Jantvik. The new computer model features a three-module architecture, with each individual module acting like a different “sense”. One of the modules is currently used to model how visual information and stimuli are processed by the brain.

A second module does the same for auditory data, while a third integrates the first two and makes them work together. The main goal of the investigation is to produce a system advanced enough that scientists can take data on how cortical processes such as sight and hearing work, and then incorporate them into practical engineering applications. “An example is when your ears become blocked; then we human beings notice that something is ‘weird’. There are systems designed to detect when a sensor is faulty, but they do not solve the problem the same way that biology does, and it remains to be seen which method is best,” the scientist adds.
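The research announcement does not include any code, but a minimal sketch of the three-module idea might look like the following. The class names, the toy confidence values, and the simple confidence-weighted fusion are illustrative assumptions made here, not the LUT researchers' actual model.

```python
# A toy sketch of a three-module arrangement: one module per "sense" plus an
# integration module that fuses their outputs into a single percept.
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class ModuleOutput:
    estimate: float      # the module's current interpretation of the stimulus
    confidence: float    # how much the module trusts its own estimate (0..1)


class SenseModule:
    """A single sensory pathway, e.g. vision or hearing."""

    def __init__(self, name: str, reliability: float):
        self.name = name
        self.reliability = reliability  # assumed fixed reliability of this sense

    def process(self, raw_signal: Optional[float]) -> Optional[ModuleOutput]:
        # If the sensor delivers nothing (a "faulty sensor"), report nothing.
        if raw_signal is None:
            return None
        return ModuleOutput(estimate=raw_signal, confidence=self.reliability)


class IntegrationModule:
    """Combines the outputs of the individual senses into one percept."""

    def fuse(self, outputs: List[Optional[ModuleOutput]]) -> Optional[float]:
        valid = [o for o in outputs if o is not None]
        if not valid:
            return None
        # Confidence-weighted average: senses that trust themselves more
        # contribute more to the final percept.
        total_conf = sum(o.confidence for o in valid)
        return sum(o.estimate * o.confidence for o in valid) / total_conf


if __name__ == "__main__":
    vision = SenseModule("vision", reliability=0.9)
    hearing = SenseModule("hearing", reliability=0.6)
    integrator = IntegrationModule()

    # Both senses report; the fused percept leans toward the more reliable one.
    both = integrator.fuse([vision.process(1.0), hearing.process(2.0)])
    print(f"fused percept with both senses: {both:.2f}")

    # The "ear is blocked" case: hearing delivers nothing, vision carries the percept.
    blocked = integrator.fuse([vision.process(1.0), hearing.process(None)])
    print(f"fused percept with hearing missing: {blocked:.2f}")
```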

One of the most impressive things about the new model is that it can accurately modify its interpretation of one sensory input when a second input is added. This is precisely how our brains function, adapting on the fly to enormous amounts of data. All these signals come from our five primary senses – touch, hearing, taste, smell and sight – but also from secondary systems, such as the ones governing balance and spatial orientation. What LUT researchers are trying to do is create a machine that is able to integrate this type of data on its own, without algorithms teaching it how, AlphaGalileo reports.
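A common way cognitive scientists describe this effect is reliability-weighted cue combination: adding a second, noisier input both shifts the percept and makes it more certain than either sense alone. The short sketch below, with made-up numbers, only illustrates that general idea and is not taken from the LUT model.

```python
def combine(x1: float, var1: float, x2: float, var2: float):
    """Fuse two noisy estimates, weighting each by its reliability (1/variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_x = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_x, fused_var


# Sight alone places the event at position 10.0 with variance 4.0.
print("sight alone:", (10.0, 4.0))

# Adding hearing (position 14.0, noisier at variance 9.0) shifts the percept
# and reduces the overall uncertainty below that of either sense on its own.
print("sight + hearing:", combine(10.0, 4.0, 14.0, 9.0))
```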
