Blog

Nanocars into Robotics

 

Materials science research is now entering a new phase in which the structure and properties of materials can be investigated, characterized, and controlled at the nanoscale. New and sometimes unexpected material properties appear at the nanoscale, bringing fresh excitement to the field. In this talk, special emphasis will be given to one-dimensional nanotubes and nanowires, because they exhibit unusual physical properties due to their reduced dimensionality and enhanced surface-to-volume ratio. These unusual properties have attracted interest for potential applications in novel electronic, optical, magnetic, and thermoelectric devices.


Another feature of nanotechnology is that it is a truly multidisciplinary area of research and development. Research at the nanoscale is unified by the need to share knowledge of tools and techniques, as well as of the physics governing atomic and molecular interactions in this new realm.


The goal here is to introduce nanotechnology into the field of robotics to achieve realistic movement, a long-standing human dream. By giving the robot such a nanocar-based "nervous system," we gain two advantages: information can be passed along in real time, and movement can be triggered at the very moment the nanocars reach the part that is to be moved. To get there, we first need some background on nanotechnology; from that foundation we will look at what nanocars are, and then at how nanocars can be introduced into this artificial neuron system.

 

 

 


Robots Can Ape Us, But Will They Ever Get Real?

 



Bonding

Moya Bosley gets up close with Quasi. Her dad, Will, was on the team that built her robot buddy.

One of the most profound questions of engineering, arguably, is whether we will ever create human-level consciousness in a machine. In the meantime, robots continue to take tiny little bot steps in the direction of faux humanity. Take Quasi, for instance, a robot dreamed up by Carnegie Mellon students that mimics the behavior of a 12-year-old boy [see “Heart of a New Machine” by Kim Krieger, in this issue]. Quasi’s “moods” depend on what’s been happening in his environment, but rather than being driven by prepubescent biology, they are architected by an elaborately scripted software-based behavioral model that triggers his responses. Quasi lets you know how he’s “feeling” through the changing colors of his LED eyes and his body language.
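To give a flavor of how such a scripted behavioral model might be wired up, here is a toy sketch of our own (not Quasi’s actual software; all names and rules are invented): scripted rules map environmental events to mood transitions, and each mood drives an LED color.

```python
# Toy mood model in the spirit of the description above. Quasi's real
# behavioral scripts are far more elaborate; these rules are invented.
MOOD_COLORS = {"happy": "green", "curious": "blue", "grumpy": "red"}

# Scripted rules: (current mood, environmental event) -> next mood.
RULES = {
    ("curious", "greeting"): "happy",
    ("happy", "ignored"): "grumpy",
    ("grumpy", "greeting"): "curious",
}

class MoodModel:
    def __init__(self, mood="curious"):
        self.mood = mood

    def on_event(self, event):
        # Unrecognized events leave the mood unchanged.
        self.mood = RULES.get((self.mood, event), self.mood)
        return MOOD_COLORS[self.mood]  # e.g., color to drive the LED eyes

quasi = MoodModel()
print(quasi.on_event("greeting"))  # "green": a greeting cheers him up
```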

Other technologies are emulating more straightforward human traits. In the 9 June issue of Science, Vivek Maheshwari and Ravi F. Saraf of the University of Nebraska-Lincoln described their invention of a sensor that could allow robots to perceive temperature, pressure, and texture with exquisite sensitivity. Their sensor can detect pressures of about 10 kilopascals and distinguish features as small as 40 micrometers across, a sensitivity comparable to that of a human finger.

The Nebraska team is working on medical applications for the sensor. But it’s the idea of covering portions of a robot’s surface, particularly its “hands,” with these sensors that’s been making headlines.

Right now there are robots with increasingly sophisticated perceptual abilities and small behavioral repertoires operating in real-life environments. There are underwater vehicles that can map large swathes of sea bottom with total autonomy. There are computers operating on big problems at blazing computational speeds. But we still seem to be far away from that moment when our computational devices become autonomous entities with minds and brains (or the machine equivalent) of their own.

People have speculated about such a moment for decades, and most recently, ideas surrounding the questions of whether and when machine intelligence could equal and then surpass our own biological braininess have been subsumed into something called the Singularity. Popularized by science-fiction author and computer scientist Vernor Vinge in a 1983 article in Omni magazine, the concept has its early roots in the ideas of such cyberneticists as John von Neumann and Alan Turing. Notions about the Singularity (when it will happen, how it will happen, what it means for human beings and human civilization) come in several flavors. Its best-known champions are roboticist Hans Moravec and computer scientist Raymond Kurzweil, who argue that when machine sapience kicks in, the era of human supremacy will be over. But it will be a good-news/bad-news situation: Moravec sees an era of indulgent leisure and an end to poverty and want; Kurzweil looks forward to uploading his brain into a computer memory and living on, in effect, indefinitely. But ultimately there’s also a good chance we’ll be booted off our little planet. Moravec goes so far as to predict that this massive machine intelligence will absorb the entire universe and everything in it, and that we will become part of the contents of this greater-than-human intelligence’s infinite knowledge database.

How would it work? According to Vinge’s vision, once computer performance and storage capacity rival those of animals, a phase we are beginning to enter, superhumanly intelligent machines capable of producing ever more intelligent machines will simply take over. This intellectual runaway, writes Vinge, “will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected–perhaps even to the researchers involved. (‘But all our previous models were catatonic! We were just tweaking some parameters….’) If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.”

Some thinkers dismiss the Singularity as “the rapture of the nerds.” Others believe it’s just a matter of time. Picking up on the good-news/bad-news theme, the Institute for the Future’s Paul Saffo has remarked: “If we have superintelligent robots, the good news is that they will view us as pets; the bad news is they will view us as food.”

 

 

 


Intel Core i7 Review: Nehalem Gets Real

 

Nehalem is here.

Anticipation for Intel’s latest CPU architecture rivals the excitement that greeted the original Core 2 Duo. It’s not just that Nehalem is a new CPU architecture. Intel’s new CPU line also brings along with it a new system bus, new chipsets, and a new socket format.

Today, we’re mainly focusing on the Core i7 CPU and its performance compared to Intel’s Core 2 quad-core CPUs. There’s a ton of data to sift through just on CPU performance. We’ll have ample opportunity to dive into the platform, and its tweaks, in future articles.

Intel will be launching three new Core i7 products in the next couple of weeks, at 2.66GHz, 2.93GHz, and 3.20GHz, at prices ranging from $285 to $999 (qty. 1,000). That’s right: You’ll be able to pick up a Core i7 CPU for around $300 fairly soon. Of course, that’s not the whole story: You’ll need a new motherboard and very likely, new memory, since the integrated memory controller only supports DDR3.

In the past several weeks, we’ve been locked in the basement lab, running a seemingly endless series of benchmarks on six different CPUs. Now it’s time to talk results. While we’ll be presenting our usual stream of charts and numbers, we’ll try to put them in context, including discussions of how and when it might be best to upgrade.

Let’s get started with a peek under the hood.

Core i7 Genesis
The Core i7 is Intel’s first new CPU architecture since the original Core 2, which shipped back in July 2006. It’s hard to believe that’s already more than two years ago.

Since then, Intel has shipped incremental updates to the product line. Quad-core Core 2 CPUs arrived in November 2006, in the form of the QX6700. AMD was quick to point out that Intel’s quad-core solutions weren’t “true” quad-core processors, but consisted of two Core 2 Duo dies in a single package. Despite that purist objection, Intel’s quad-core solutions proved highly successful in the market.

The original Core 2 line was built on a 65nm manufacturing process. In late 2007, Intel began shipping 45nm CPUs, code-named Penryn. Intel’s 45nm processors offered a few incremental feature updates, but were basically continuations of the Core 2 line.

In the past year, details about Nehalem began dribbling out, culminating with full disclosure of the Core i7 architecture at the August 2008 Intel Developer Forum. If you want more details about Nehalem’s architecture, that article is well worth a read. However, we’ll touch on a few highlights now.

Cache and Memory
The initial Core i7 CPUs will offer a three-tiered cache structure. Each individual core contains two caches: a 64K L1 cache (split into a 32K instruction cache and a 32K data cache), plus a 256K unified L2 cache. An 8MB L3 cache is shared among the four cores. That 256K L2 cache is interesting, because it’s built with an 8-T (eight transistors per cell) SRAM structure. This facilitates running at lower voltages, but also takes up more die space. That’s one reason the core-specific L2 cache is smaller than you might otherwise expect.

Like AMD’s current CPU line, Nehalem uses an integrated, on-die memory controller. Intel has finally moved the memory controller out of the north bridge. The current memory controller supports only DDR3 memory. The new controller also supports three channels of DDR3 per socket, with up to three DIMMs per channel supported. Earlier, MCH-style memory controllers only supported two channels of DRAM.

The use of triple-channel memory mitigates the relatively low, officially supported DDR3 clock rate of 1066MHz (effective). In conversations, various Intel representatives were quick to point out that three channels of DDR3-1066 equate to roughly 30GB/sec of memory bandwidth.
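As a back-of-the-envelope check (our own arithmetic, not a figure from Intel), peak DRAM bandwidth is just the transfer rate times the bus width times the channel count. Strict DDR3-1066 math lands at about 25.6GB/sec, so figures near 30GB/sec presumably assume faster modules, such as DDR3-1333:

```python
# Peak theoretical DRAM bandwidth: transfers/sec x bytes/transfer x channels.
# Each DDR3 channel has a 64-bit (8-byte) data bus.
def dram_peak_gb_s(megatransfers, bus_bytes=8, channels=3):
    return megatransfers * 1e6 * bus_bytes * channels / 1e9

print(dram_peak_gb_s(1066))  # ~25.6 GB/s for triple-channel DDR3-1066
print(dram_peak_gb_s(1333))  # ~32.0 GB/s if the modules run at 1333MT/s
```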

The integrated memory controller also clocks higher than one built into a north bridge chip, although not necessarily at the full processor clock speed. This higher clock, plus eliminating the trip across the north bridge link, substantially improves memory latency.

To facilitate the integrated memory controller, Intel developed a new, point-to-point system interconnect, similar in concept to AMD’s HyperTransport. Known as QuickPath Interconnect, or QPI for short, the new interconnect can move data at peak rates of 25.6GB/sec (at a 6.4 gigatransfers-per-second base rate). Note that not all Nehalem processors will support the full theoretical bandwidth. The Core i7 940 and 920 CPUs support a 4.8 gigatransfers-per-second base rate, with a maximum throughput of 19.2GB/sec per link. That’s still more than enough bandwidth for three DDR3-1066 memory channels.
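The QPI numbers work out the same way (again a sketch of our own, assuming QPI’s 2-byte payload per transfer in each direction, moving simultaneously both ways):

```python
# QPI carries 2 payload bytes per transfer, in both directions at once,
# so aggregate bandwidth = transfer rate x 2 bytes x 2 directions.
def qpi_peak_gb_s(gigatransfers, bytes_per_transfer=2, directions=2):
    return gigatransfers * bytes_per_transfer * directions

print(qpi_peak_gb_s(6.4))  # 25.6 GB/s aggregate at 6.4 GT/s
print(qpi_peak_gb_s(4.8))  # 19.2 GB/s aggregate for the Core i7 920/940
```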

Improvements to the Base Core Architecture
Core i7 boasts a substantial set of enhancements over the original Core 2 architecture, some of which are more subtle than others.

Let’s run down some of the more significant enhancements, in no particular order.

* The Return of Hyper-Threading—Core i7 now implements Hyper-Threading, Intel’s version of simultaneous multithreading (SMT). Each processor core can handle two simultaneous execution threads. Intel added processor resources, including deeper buffers, to enable robust SMT support. Load buffers have been increased from 32 (Core 2) to 48 (Core i7), while the number of store buffers went from 20 to 32.
* New SSE4.2 instructions—Intel enhanced SSE once again, by adding instructions that can help further speed up media transcoding and 3D graphics.
* Fast, unaligned cache access—Before Nehalem, data needed to be aligned on cache-line boundaries for maximum performance. That’s no longer true with Nehalem. This will help newer applications written for Nehalem more than older ones, if only because compilers and application authors have long taken great care to align data along cache-line boundaries (see the alignment sketch after this list).
* Advanced Power Management—The Core i7 actually contains another processor core, much tinier than the main cores. This is the power management unit, and is a dedicated microcontroller on the Nehalem die that’s not accessible from the outside world. Its sole purpose is to manage the power envelope of Nehalem. Sensors built into the main cores monitor thermals, power and current, optimizing power delivery as needed. Nehalem is also engineered to minimize idle power. For example, Core i7 implements a per core C6 sleep state.
* Turbo Mode—One interesting aspect of Core i7’s power management is Turbo Mode (not to be confused with Turbo Cache). Turbo Mode is a sort of automatic overclocking feature, in which individual cores can be driven to higher clock frequencies as needed. Turbo Mode is treated as another sleep state by the power management unit, and operates transparently to the OS and the user.
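For readers curious what “aligning data along cache-line boundaries” looks like in practice, here is one common trick (a generic illustration of ours, not code from Intel or this review): over-allocate a buffer and slice it so the data pointer starts on a 64-byte boundary.

```python
import numpy as np

CACHE_LINE = 64  # bytes; the cache-line size on Core 2 and Core i7

def aligned_array(n, dtype=np.float64, align=CACHE_LINE):
    """Allocate an n-element array whose first byte sits on an
    align-byte boundary, by over-allocating and slicing."""
    itemsize = np.dtype(dtype).itemsize
    raw = np.empty(n * itemsize + align, dtype=np.uint8)
    offset = (-raw.ctypes.data) % align      # bytes to skip for alignment
    return raw[offset:offset + n * itemsize].view(dtype)

a = aligned_array(1024)
assert a.ctypes.data % CACHE_LINE == 0  # starts exactly on a cache line
```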

 

 

 

 


Monkey’s Brain Can “Plug and Play” to Control Computer With Thought

 

21 November 2009: Our brains have a remarkable ability to assimilate motor skills that allow us to perform a host of tasks almost automatically, such as driving a car, riding a bicycle, and typing on a keyboard. Now add another to the list: operating a computer using only thoughts.

Researchers at the University of California, Berkeley, have demonstrated how rhesus monkeys with electrodes implanted in their brains used their thoughts to control a computer cursor. Once the animals had mastered the task, they could repeat it proficiently day after day. The ability to repeat such feats is unprecedented in the field of neuroprosthetics. It reflects a major finding by the scientists: A monkey’s brain is able to develop a motor memory for controlling a virtual device in a manner similar to the way it creates such a memory for the animal’s body.

The new study, which should apply to humans, provides hope that physically disabled people may one day be able to operate advanced prosthetics in a natural, effortless way. Previous research in brain-machine interfaces, or BMIs, had already shown that monkeys and humans could use thought to control robots and computers in real time. But subjects weren’t able to retain the skills from one session to another, and the BMI system had to be recalibrated every session. In this new study, monkey do, monkey won’t forget.

“Every day we just put the monkeys to do the task, and they immediately recalled how to control the device,” says Jose Carmena, an IEEE senior member and professor of electrical engineering, cognitive science, and neuroscience who led the study. “It was like ‘plug and play.’”

Carmena and Karunesh Ganguly, a postdoc in Carmena’s lab, describe their work in a paper today in PLoS Biology.

The findings may “change the whole way that people have thought about how to approach brain-machine interfaces,” says Lena Ting, a professor of biomedical engineering at Emory University and the Georgia Institute of Technology, in Atlanta. Previous research, she explains, tried to use the parts of the brain that operate real limbs to control an artificial one. The Berkeley study suggests that an artificial arm may not need to rely on brain signals related to the natural arm; the brain can assimilate the artificial device as if it were a new part of the body.

Krishna Shenoy, head of the Neural Prosthetic Systems Laboratory, at Stanford University, says the study is “beautiful,” adding that the “day-over-day learning is impressive and has never before been demonstrated so clearly.”

At the heart of the findings is the fact that the researchers used the same set of neurons throughout the three-week-long study. Keeping track of the same neurons is difficult, and previous experiments had relied on varying groups of neurons from day to day.

The Berkeley researchers implanted arrays of microelectrodes in the primary motor cortex, about 2 to 3 millimeters deep into the brain, tapping 75 to 100 neurons. The procedure was similar to that of other groups. The difference was that here the scientists carefully monitored the activity of these neurons using software that analyzed the waveform and timing of the signals. When they spotted a subset of 10 to 40 neurons that didn’t seem to change from day to day, they’d start the experiment; several times, one or more neurons would stop firing, and they’d have to restart from scratch. But the persistence paid off.
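A minimal sketch of that kind of stability screening (our own illustration; the paper’s actual waveform analysis is more involved) would keep only the units whose mean spike waveform is nearly identical across sessions:

```python
import numpy as np

def stable_units(waves_day1, waves_day2, threshold=0.95):
    """waves_dayX: array of shape (n_units, n_samples) holding each
    unit's mean spike waveform on that day. Returns indices of units
    whose waveforms correlate above the threshold across days."""
    keep = []
    for i, (w1, w2) in enumerate(zip(waves_day1, waves_day2)):
        if np.corrcoef(w1, w2)[0, 1] >= threshold:
            keep.append(i)
    return keep
```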

Monitoring the neurons, the scientists placed the monkey’s right arm inside a robotic exoskeleton that kept track of its movement. On a screen, the monkey saw a cursor whose position corresponded to the location of its hand. The task consisted of moving the cursor to the center of the screen, waiting for a signal, and then dragging the cursor onto one of eight targets in the periphery. Correct maneuvers were rewarded with sips of fruit juice. While the animal played, the researchers recorded two data sets—the brain signals and corresponding cursor positions.

BRAIN POWER: During manual control [left], the monkey maneuvers the computer cursor while the researchers record the neuronal activity, used to create a decoder. Under brain control [right], the researchers feed the neuronal signals into the decoder, which then controls the cursor.

The next step was to determine whether the animal could perform the same task using only its brain. To find out, the researchers needed first to create a decoder, a mathematical model that translates brain activity into cursor movement. The decoder is basically a set of equations that multiply the firing rates of the neurons by certain numbers, or weights. When the weights have the right values, you can plug the neuronal data into the equations and they’ll spill out the cursor position. To determine the right weights, the researchers had only to correlate the two data sets they’d recorded.
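In code, such a decoder reduces to a linear least-squares fit. Here is a minimal sketch with synthetic data (our own illustration; the study’s actual decoder was more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 5000, 30                     # time samples, neurons
W_true = rng.normal(size=(N, 2))    # hidden mapping to (x, y)

# Synthetic stand-ins for the two recorded data sets.
rates = rng.poisson(5.0, size=(T, N)).astype(float)      # firing rates
cursor = rates @ W_true + rng.normal(scale=0.5, size=(T, 2))

# "Correlating the two data sets": find weights W minimizing
# ||rates @ W - cursor||^2 by least squares.
W, *_ = np.linalg.lstsq(rates, cursor, rcond=None)

# Decoding: plug new firing rates into the fitted equations.
new_rates = rng.poisson(5.0, size=(1, N)).astype(float)
xy = new_rates @ W   # predicted cursor position
```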

Next the scientists immobilized the monkey’s arm and fed the neuronal signals measured in real time into the decoder. Initially, the cursor moved spastically. But over a week of practice, the monkey’s performance climbed to nearly 100 percent and remained there for the next two weeks. For those later sessions, the monkey didn’t have to undergo any retraining—it promptly recalled how to skillfully maneuver the cursor.

The explanation lies in the behavior of the neurons. The researchers observed that the set of neurons they were monitoring would constantly fire while the animal was in its cage or even sleeping. But when the BMI session began, the neurons quickly locked into a pattern of activity—known as a cortical map—for controlling the cursor. (The researchers replicated the experiment with another monkey.)

The study is a big improvement over early experiments. In past studies, because researchers didn’t keep track of the same set of neurons, they had to reprogram the decoder every time to adapt to the new cortical activity. The changes also meant that the brain couldn’t form a cortical map of the prosthetic device. That limitation raised questions about whether paralyzed people would be able to use prosthetics with enough proficiency to make them really useful.

The Berkeley scientists showed that the cortical map can be stable over time and readily recalled. But they also demonstrated a third characteristic.

“These cortical maps are robust, resistant to interference,” says Carmena. “When you learn to play tennis, that doesn’t make you forget how to drive a car.”

To demonstrate that, the researchers taught the monkey how to use a second decoder. To create the new decoder, they again recorded neuronal activity while the animal manually moved the cursor using the exoskeleton arm. The new data sets contained small fluctuations compared with the original ones, resulting in different weights for the equations. Using a new decoder is analogous to giving a different racket to a tennis player, who needs some practice to get accustomed to it.

As expected, the monkey’s performance was poor at first, but after just a few days it reached nearly 100 percent. What’s more, the researchers could now switch back and forth between the old and new decoders. All the animal saw was that the cursor changed color, and its brain would promptly produce the signals needed. This wasn’t a one-trick monkey, so to speak.

But perhaps more surprising, the researchers also tested a shuffled decoder. They took one of the existing decoders and randomly redistributed the weights in the equations among the neurons. This meant that the new decoder, unlike the early ones, had no relationship to actual movements of the monkey’s arm. It was a bit like giving a tennis player a hammer instead of a different racket.
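In terms of the decoder sketch shown earlier, shuffling amounts to permuting which neuron gets which row of weights:

```python
# Continuing the sketch above: reassign each neuron's weights to a
# randomly chosen other neuron, breaking the link between the decoder
# and the monkey's real arm movements.
perm = rng.permutation(N)
W_shuffled = W[perm, :]
xy_shuffled = new_rates @ W_shuffled  # initially meaningless output
```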

What followed was a big surprise: after only about three days, the monkeys learned the new decoder. Just as before, practice allowed the neurons to develop a cortical map for the new task.

“It’s pretty remarkable that it could adapt to basically a corrupted decoder,” says Nicho Hatsopoulos, a professor of computational neuroscience at the University of Chicago. He says there’s a lot of focus in the field on building better and better decoders, but the new results suggest it may not be that important, “because the monkey will learn to improve its own performance.”

Carmena believes that the brain’s ability to store prosthetic motor memories is a key step toward practical BMI systems. Yet he emphasizes that it’s hard to predict when this technology will become available and is careful not to give patients false expectations. He says that the improvements needed include making the BMI systems less invasive and able to incorporate more than just visual feedback, with prosthetics that can provide users with tactile information, for example.

Still, he knows where he wants to go.

“I have this idea for a long-term goal,” he says. “Can you tie your shoe in BMI mode?”

 

 

 


Industrial Visit

The new execom of IEEESBCET conducted its first industrial visit on 29 January 2011 (Saturday). Forty students in all, from electrical, electronics, applied sciences, and computer science, were taken to Kerala Electrical & Allied Engineering Co. Ltd (KEL), Kundara. We set out from college at 7 in the morning and paused on the way for breakfast. We reached KEL by 9 and were greeted by the GM, Mr. Sreekumar. The employees assisted us throughout the visit. In the foundry, cast iron and scrap metal were melted in an induction furnace to produce mild steel, with the slag removed occasionally.

 

The various sands used in the foundry process were explained, and the ways of mixing them were briefly demonstrated. Preparations for moulding were shown and the casting process was explained. Pulleys (axial and alternator) and end shields are made here.

 

Next we visited the alternator section, where the rotor and stator are made. The 18mm-thick steel sheet for the yoke is supplied by SAIL. Laminations of mild steel were packed together on the rotor and pressure was applied to reduce air gaps. Likewise, laminations of mild steel were used to make the stator poles. The windings were laid on the stator, and kraft paper was used as an insulator.
