Technology Articles


Hawk-Eye’s view of UDRS

 

 

Rightly or wrongly, the way LBW is practically implemented at all levels is very different from the laws as defined by the MCC. I have been quite surprised when leading officials have explained a decision-making process which doesn’t follow a procedure of applying the laws in a scientific manner. As such, umpiring is more of an art than a science.

 

The referral system to date has suggested that players do not have accurate judgement of LBW. Elite panel umpires do an excellent job, but the reality is that no human can accurately and consistently officiate LBW, and certainly not at the grassroots of the sport. The game has therefore evolved over the years with a group of umpires’ best guesses being judged by the less accurate guesses of players, and even of spectators 100 metres away. It has been quite amusing when Hawk-Eye has worked for two broadcasters at the same event – the Australian commentators have been convinced a ball was missing, whilst the Indian commentary team has been certain beyond doubt that the same ball was hitting.

 

It is therefore human nature for the role of umpiring to have evolved towards a path of least resistance, or, put another way, to minimise the total number of annoyed points awarded by each team throughout the course of the match. Pleased points are seldom awarded. The art of umpiring has evolved away from the science of applying the laws, as the following examples show:

A bowler bowls a peach of a delivery: it pitches outside off and nips back sharply. The batsman shoulders arms and is beaten all ends up. The bowler feels fully deserving of a wicket, having set him up perfectly with three preceding away-swingers. The umpire sympathises with the bowler’s plea for justice, such that “was the ball hitting the stumps?” can be brushed over in the overall sales pitch of the appeal. Certainly an umpire will receive fewer annoyed points with an OUT decision. The disgruntled batsman will find little support in the dressing room, where primary blame may be assigned to poor shot selection.

The same is true in reverse for a full-length delivery pitching on middle and leg which should have been punished through midwicket for four, but is missed and would have gone on to hit the outside of leg stump. Far more “reasonable doubt” is given on leg stump than on its favoured sibling – off stump.

Dickie Bird was regarded by many as the best umpire of his time. Were his decisions better than the next umpire’s? Or were his personality and rapport with the players the real reason his career average of annoyed points was low?

There is great variation in how people think LBW should be officiated. At one end there is the scientist, who would argue for the precise definition within the laws. At the other end there is the artist, who thinks the umpire should award the decision to the most worthy team, based roughly on the criteria defined in the laws.

Hawk-Eye’s role in providing technology is to assist the umpire in making the decision he would want to make, without necessarily changing the compromise between subjectivity and definitive fact. There are many decisions over the course of a Test series where the artist and the scientist would agree that a mistake has been made, and it is these decisions which are most important to correct with technology.

 

Under the existing protocol a ball has to be hitting the stumps by approximately 4.5cm to be definitely given out (assuming the other elements are OK). This inner zone of the stumps – the “zone of certainty” – is not there to account for inaccuracies in Hawk-Eye. Hawk-Eye is much more accurate than that, and I invite any sceptic to come and see how Hawk-Eye works before disagreeing. From independent tests conducted, pitching and interception points had an error of 2.6mm. The prediction error was under 1cm in almost all cases. In the “extreme” cases where a batsman is a long way down the wicket, or there is little data post-bounce, it would be under 2cm. The zone of certainty is there to maintain some of the art of umpiring and hence ensure that the use of technology doesn’t change the fabric of the game.
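
To make the numbers above concrete, here is a minimal sketch of how a margin-based rule of this kind could be expressed in code. It is purely illustrative: the 4.5cm margin comes from the protocol described above, but the function, its name and its inputs are invented for the example and are not Hawk-Eye’s software.

ZONE_OF_CERTAINTY_CM = 4.5  # ball must be hitting this far inside the stumps to overturn NOT OUT

def review_verdict(predicted_overlap_cm, on_field_decision):
    # predicted_overlap_cm: how far inside the outer edge of the stumps the
    # predicted path hits (0 = just clipping, negative = missing entirely).
    if predicted_overlap_cm < 0:
        return "NOT OUT"                    # shown to be missing the stumps
    if predicted_overlap_cm >= ZONE_OF_CERTAINTY_CM:
        return "OUT"                        # inside the zone of certainty
    return on_field_decision                # marginal: the on-field call stands

print(review_verdict(5.0, "NOT OUT"))   # OUT
print(review_verdict(1.0, "NOT OUT"))   # NOT OUT – the umpire's call stands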

 

The ICC, and in particular David Richardson, have done a good job in listening to feedback and responding to issues which have arisen. The improvement in the protocol is a testament to that. The ICC meet again in May, when the topic will be discussed. Below is a list of issues which they may consider:

On field call area

Extend the “on field call” area beyond the stumps. This would enable an OUT decision to remain as OUT if Hawk-Eye shows the ball just missing the stumps. I think England assigned more “annoyed points” when Swann’s successful LBW appeal against Morkel was overturned than South Africa would have done had the original decision been upheld. The same applies to the North decision in Australia. Jonathan Agnew suggested the area should be extended above the stumps and to the off side, but not the leg side. One can understand the justification, but try explaining the reason to an American!

Losing a review

Should a fielding side lose a review when Hawk-Eye returns an answer of “on field call”, i.e. they were correct to review, just not correct enough?

Sex it up

“Sex up” the review system by giving it a better name and showing the process on the big screen in the ground. Hawk-Eye has added an extra element of drama and entertainment to tennis, which cricket doesn’t currently have. What is there to hide? A 3rd umpire shouldn’t be influenced by crowd noise any more than a standing umpire or a TMO in rugby, where the replays are shown on the big screen.

Carry over unused reviews

Limit the number of unsuccessful reviews per innings, but allow any unused reviews from the 1st innings to be carried over to the 2nd. This prevents pointless reviews when sides are 9 wickets down.

Time taken to make a review

It is clear teams are already waiting for a signal from the dressing room. Some people think that this is acceptable, but the majority appear to feel that it is wrong. On an operational level, the review process can be sped up. For example, it is not necessary to show four replays trying to determine whether there has been an inside edge on an LBW appeal if Hawk-Eye is available and is going to show a NOT OUT decision.

Communication

Communicate the review protocol so that cricket fans fully understand it and the reasons for the decisions made. Currently I haven’t seen a good explanation on any cricket website, and many of the arguments against the referral system are made by people who don’t understand it.

Sponsor

Finding a sponsor. Tennis now makes money from Hawk-Eye, and for the referral system to be viable long term it cannot be a drain on financial resources or entirely reliant on the broadcasters.

Take into account the reason for the initial decision

The existing protocol is exposed to many annoyed points in the scenario where a batsman goes back to a ball which keeps a bit low and is clearly going on to hit the top of middle. The standing umpire gives a NOT OUT decision only because he thinks there has been an inside edge. The TV replays show conclusively that it was pad first, but the NOT OUT decision remains because the ball was hitting the stumps just above the zone of certainty.

Umpire vs player reviews

It appears that the only reason why umpires are allowed to refer some decisions, but players have to review others, is historical. How would you explain the rationale to an American?

Other technical approaches to edges

The ICC puts out a request for tender to R&D companies to develop a touch-sensitive tape which could be practically applied to the edges of a bat and would leave a mark for approximately 30 seconds if the ball hits it. This approach could potentially be faster, cheaper and more reliable for detecting an edge. It would also work at all levels of the game. Hotspot is a brilliant technology, but most “annoyed points” with the referral system are on edges. Even if a suitable material could not be found, it would be a good PR exercise for the ICC to be pro-active in finding technical solutions rather than being entirely reliant on the broadcasters.

 

 


NANOCARS into ROBOTICS

 

Material Science research is now entering a new phase where the structure and properties of materials can be investigated, characterized and controlled at the nanoscale. New and sometimes unexpected material properties appear at the nanoscale, thus bringing new excitement to this research field. In this talk, special emphasis will be given to one-dimensional nanotubes and nanowires because they exhibit unusual physical properties, due to their reduced dimensionality and their enhanced surface/volume ratio. These unusual properties have attracted interest in their potential for applications in novel electronic, optical, magnetic and thermoelectric devices.


Another feature of nanotechnology is that it is the one area of research and development that is truly multidisciplinary. Research at the nanoscale is unified by the need to share knowledge on tools and techniques, as well as information on the physics affecting atomic and molecular interactions in this new realm.


We now propose to introduce nanotechnology into the field of robotics to achieve realistic movements, which has been a human dream for more than five years. By introducing this nanoscale nervous system into the robot we gain two advantages: we can pass information in real time, and we can also trigger the movements at the same moment the nanocars reach the actual part where the movement is to be caused. We therefore need some knowledge of nanotechnology, from which we will see what nanocars are and then how nanocars can be introduced into the neuron system.

 

 

 


Robots Can Ape Us, But Will They Ever Get Real?

 



[Photo] Bonding: Moya Bosley gets up close with Quasi. Her dad, Will, was on the team that built her robot buddy.

One of the most profound questions of engineering, arguably, is whether we will ever create human-level consciousness in a machine. In the meantime, robots continue to take tiny little bot steps in the direction of faux humanity. Take Quasi, for instance, a robot dreamed up by Carnegie Mellon students that mimics the behavior of a 12-year-old boy [see “Heart of a New Machine” by Kim Krieger, in this issue]. Quasi’s “moods” depend on what’s been happening in his environment, but rather than being driven by prepubescent biology, they are architected by an elaborately scripted software-based behavioral model that triggers his responses. Quasi lets you know how he’s “feeling” through the changing colors of his LED eyes and his body language.
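
As a rough illustration of what such a scripted behavioral model might look like (this is a hypothetical sketch, not the actual Quasi software; the event names, moods and colors are invented):

MOOD_RULES = {
    "greeted":   ("happy",   "green"),
    "ignored":   ("sulky",   "blue"),
    "teased":    ("annoyed", "red"),
    "surprised": ("curious", "yellow"),
}

class ScriptedRobot:
    def __init__(self):
        self.mood, self.eye_color = "neutral", "white"

    def observe(self, event):
        # Look the event up in the scripted table and update the mood.
        if event in MOOD_RULES:
            self.mood, self.eye_color = MOOD_RULES[event]

    def express(self):
        # "Body language" reduced to a printable summary for the sketch.
        return "mood=%s, eyes=%s" % (self.mood, self.eye_color)

bot = ScriptedRobot()
bot.observe("teased")
print(bot.express())   # mood=annoyed, eyes=red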

Other technologies are emulating more straightforward human traits. In the 9 June issue of Science, Vivek Maheshwari and Ravi F. Saraf of the University of Nebraska-Lincoln described their invention of a sensor that could allow robots to perceive temperature, pressure, and texture with exquisite sensitivity. Their sensor can detect surface details to within a pressure of about 10 kilopascals and distinguish features as small as 40 micrometers across – a sensitivity comparable to that of a human finger.

The Nebraska team is working on medical applications for the sensor. But it’s the idea of covering portions of a robot’s surface, particularly its “hands,” with these sensors that’s been making headlines.

Right now there are robots with increasingly sophisticated perceptual abilities and small behavioral repertoires operating in real-life environments. There are underwater vehicles that can map large swathes of sea bottom with total autonomy. There are computers operating on big problems at blazing computational speeds. But we still seem to be far away from that moment when our computational devices become autonomous entities with minds and brains–or the machine equivalent–of their own.

People have speculated about such a moment for decades, and most recently, ideas surrounding the questions of whether and when machine intelligence could equal and then surpass our own biological braininess have been subsumed into something called the Singularity. Popularized by science-fiction author and computer scientist Vernor Vinge in a 1983 article in Omni magazine, it has its early roots in the ideas of such cyberneticists as John von Neumann and Alan Turing. Notions about the Singularity–when it will happen, how it will happen, what it means for human beings and human civilization–come in several flavors. Its most well-known champions are roboticist Hans Moravec and computer scientist Raymond Kurzweil, who argue that when machine sapience kicks in, the era of human supremacy will be over. But it will be a good-news/bad-news situation: Moravec sees an era of indulgent leisure and an end to poverty and want; Kurzweil looks forward to uploading his brain into a computer memory and living on, in effect, indefinitely. But ultimately there’s also a good chance we’ll be booted off our little planet. Moravec goes so far as to predict that this massive machine intelligence will absorb the entire universe and everything in it, and that we will become part of the contents of this greater-than-human intelligence’s infinite knowledge database.

How would it work? According to Vinge’s vision, once computer performance and storage capacity rival those of animals–a phase we are beginning to enter–superhumanly intelligent machines capable of producing ever more intelligent machines will simply take over. This intellectual runaway, writes Vinge, “will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected–perhaps even to the researchers involved. (‘But all our previous models were catatonic! We were just tweaking some parameters….’) If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.”

Some thinkers dismiss the Singularity as “the rapture of the nerds.” Others believe it’s just a matter of time. Picking up on the good-news/bad-news theme, the Institute for the Future’s Paul Saffo has remarked: “If we have superintelligent robots, the good news is that they will view us as pets; the bad news is they will view us as food.”

 

 

 


Intel Core i7 Review: Nehalem Gets Real

 

Nehalem is here.

Anticipation for Intel’s latest CPU architecture rivals the intensity that greeted the original Core 2 Duo. It’s not just that Nehalem is a new CPU architecture: Intel’s new CPU line also brings with it a new system bus, new chipsets, and a new socket format.

Today, we’re mainly focusing on the Core i7 CPU and its performance compared to Intel’s Core 2 quad-core CPUs. There’s a ton of data to sift through just on CPU performance. We’ll have ample opportunity to dive into the platform, and its tweaks, in future articles.

Intel will be launching three new Core i7 products in the next couple of weeks, at 2.66GHz, 2.93GHz, and 3.20GHz, at prices ranging from $285 to $999 (qty. 1,000). That’s right: You’ll be able to pick up a Core i7 CPU for around $300 fairly soon. Of course, that’s not the whole story: You’ll need a new motherboard and very likely, new memory, since the integrated memory controller only supports DDR3.

In the past several weeks, we’ve been locked in the basement lab, running a seemingly endless series of benchmarks on six different CPUs. Now it’s time to talk results. While we’ll be presenting our usual stream of charts and numbers, we’ll try to put them in context, including discussions of how and when it might be best to upgrade.

Let’s get started with a peek under the hood.

Core i7 Genesis
The Core i7 is Intel’s first new CPU architecture since the original Core 2 shipped back in July 2006. It’s hard to believe that the first Core 2 processors shipped over two years ago.

Since then, Intel has shipped incremental updates to the product line. Quad-core Core 2 CPUs arrived in November 2006, in the form of the QX6700. AMD was quick to point out that Intel’s quad-core solutions weren’t “true” quad-core processors, but consisted of two Core 2 Duo dies in a single package. Despite that purist objection, Intel’s quad-core solutions proved highly successful in the market.

The original Core 2 line was built on a 65nm manufacturing process. In late 2007, Intel began shipping 45nm CPUs, code-named Penryn. Intel’s 45nm processors offered a few incremental feature updates, but were basically continuations of the Core 2 line.

In the past year, details about Nehalem began dribbling out, culminating with full disclosure of the Core i7 architecture at the August 2008 Intel Developer Forum. If you want more details about Nehalem’s architecture, that article is well worth a read. However, we’ll touch on a few highlights now.

Cache and Memory
The initial Core i7 CPUs will offer a three-tiered cache structure. Each individual core contains two caches: a 64K L1 cache (split into a 32K instruction cache and a 32K data cache), plus a 256K unified L2 cache. An 8MB L3 cache is shared among the four cores. That 256K L2 cache is interesting, because it’s built with an 8-T (eight transistors per cell) SRAM structure. This facilitates running at lower voltages, but also takes up more die space. That’s one reason the core-specific L2 cache is smaller than you might otherwise expect.
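
A quick tally of what those figures add up to per chip. The arithmetic below simply restates the cache sizes quoted above; the variable names and structure are ours, not Intel's.

KB = 1024
MB = 1024 * KB

cores = 4
l1_per_core = 32 * KB + 32 * KB   # split instruction + data caches
l2_per_core = 256 * KB            # unified, built from 8-T SRAM cells
l3_shared   = 8 * MB              # shared among all four cores

total_cache = cores * (l1_per_core + l2_per_core) + l3_shared
print("core-private cache:", cores * (l1_per_core + l2_per_core) // KB, "KB")  # 1280 KB
print("total on-die cache: %.2f MB" % (total_cache / MB))                      # 9.25 MB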

Like AMD’s current CPU line, Nehalem uses an integrated, on-die memory controller. Intel has finally moved the memory controller out of the north bridge. The current memory controller supports only DDR3 memory. The new controller also supports three channels of DDR3 per socket, with up to three DIMMs per channel. Earlier, MCH-style memory controllers only supported two channels of DRAM.

The use of triple-channel memory mitigates the relatively low officially supported DDR3 clock rate of 1066MHz (effective). In conversations, various Intel representatives were quick to point out that three channels of DDR3-1066 equate to 30GB/sec of memory bandwidth.

The integrated memory controller also clocks higher than one built into a north bridge chip, although not necessarily at the full processor clock speed. This higher clock, plus the lack of having to communicate over a north bridge link, substantially improves memory latency.

To facilitate the integrated memory controller, Intel developed a new, point-to-point system interconnect, similar in concept to AMD’s HyperTransport. Known as QuickPath Interconnect, or QPI for short, the new interconnect can move data at peak rates of 25GB/sec (at a 6.4 gigatransfers per second base rate). Note that not all Nehalem processors will support the full theoretical bandwidth. The Core i7 940 and 920 CPUs support the 4.8 gigatransfer per second base rate, with a maximum throughput of 19.2GB/sec per channel. That’s still more than enough bandwidth for three DDR3-1066 memory channels.
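
As a back-of-the-envelope check on those figures, assuming the usual QPI arrangement of two data bytes per transfer in each direction (this is our sanity check, not a line from the spec):

def qpi_peak_gb_per_s(gigatransfers_per_s, bytes_per_transfer=2, directions=2):
    # Peak bidirectional bandwidth of one link.
    return gigatransfers_per_s * bytes_per_transfer * directions

print(qpi_peak_gb_per_s(6.4))  # 25.6 GB/s, the "25GB/sec" class of link
print(qpi_peak_gb_per_s(4.8))  # 19.2 GB/s, as on the Core i7 920 and 940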

Improvements to the Base Core Architecture
Core i7 boasts a substantial set of enhancements over the original Core 2 architecture, some of which are more subtle than others.

Let’s run down some of the more significant enhancements, in no particular order.

* The Return of Hyper-Threading—Core i7 now implements Hyper-Threading, Intel’s version of simultaneous multithreading. Each processor core can handle two simultaneous execution threads. Intel added processor resources, including deeper buffers, to enable robust SMT support. Load buffers have been increased from 32 (Core 2) to 48 (Core i7), while the number of store buffers went from 20 to 32.
* New SSE4.2 instructions—Intel enhanced SSE once again, by adding instructions that can help further speed up media transcoding and 3D graphics.
* Fast, unaligned cache access—Before Nehalem, data needed to be aligned on cache-line boundaries for maximum performance. That’s no longer true with Nehalem. This will help newer applications written for Nehalem more than older ones, because compilers and application authors have historically taken great care to align data along cache-line boundaries.
* Advanced Power Management—The Core i7 actually contains another processor core, much tinier than the main cores. This is the power management unit, and is a dedicated microcontroller on the Nehalem die that’s not accessible from the outside world. Its sole purpose is to manage the power envelope of Nehalem. Sensors built into the main cores monitor thermals, power and current, optimizing power delivery as needed. Nehalem is also engineered to minimize idle power. For example, Core i7 implements a per core C6 sleep state.
* Turbo Mode—One interesting aspect of Core i7’s power management is Turbo Mode (not to be confused with Turbo Cache). Turbo Mode is a sort of automatic overclocking feature, in which individual cores can be driven to higher clock frequencies as needed. Turbo Mode is treated as another sleep state by the power management unit, and operates transparently to the OS and the user.
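
To picture how the power management unit and Turbo Mode interact, here is a deliberately simplified toy model (not Intel’s firmware; the clock step, thresholds and names are invented): idle cores are parked in a deep sleep state, and thermal headroom is spent nudging the busy cores above the base clock.

BASE_CLOCK_GHZ = 2.66
TURBO_STEP_GHZ = 0.133           # one hypothetical turbo bin

def manage_cores(core_loads, headroom_bins):
    # core_loads: per-core utilisation between 0.0 and 1.0
    states = {}
    for core, load in enumerate(core_loads):
        if load < 0.05:
            states[core] = "C6 sleep"                    # park the idle core
        else:
            bins = min(headroom_bins, 2)                 # cap the toy turbo at 2 bins
            states[core] = "%.2f GHz" % (BASE_CLOCK_GHZ + bins * TURBO_STEP_GHZ)
    return states

print(manage_cores([0.9, 0.8, 0.0, 0.01], headroom_bins=2))
# {0: '2.93 GHz', 1: '2.93 GHz', 2: 'C6 sleep', 3: 'C6 sleep'}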

 

 

 

 


Monkey’s Brain Can “Plug and Play” to Control Computer With Thought

 

21 November 2009—Our brains have a remarkable ability to assimilate motor skills that allow us to perform a host of tasks almost automatically—driving a car, riding a bicycle, typing on a keyboard. Now add another to the list: operating a computer using only thoughts.

Researchers at the University of California, Berkeley, have demonstrated how rhesus monkeys with electrodes implanted in their brains used their thoughts to control a computer cursor. Once the animals had mastered the task, they could repeat it proficiently day after day. The ability to repeat such feats is unprecedented in the field of neuroprosthetics. It reflects a major finding by the scientists: A monkey’s brain is able to develop a motor memory for controlling a virtual device in a manner similar to the way it creates such a memory for the animal’s body.

The new study, which should apply to humans, provides hope that physically disabled people may one day be able to operate advanced prosthetics in a natural, effortless way. Previous research in brain-machine interfaces, or BMIs, had already shown that monkeys and humans could use thought to control robots and computers in real time. But subjects weren’t able to retain the skills from one session to another, and the BMI system had to be recalibrated every session. In this new study, monkey do, monkey won’t forget.

“Every day we just put the monkeys to do the task, and they immediately recalled how to control the device,” says Jose Carmena, an IEEE senior member and professor of electrical engineering, cognitive science, and neuroscience who led the study. “It was like ‘plug and play.’”

Carmena and Karunesh Ganguly, a postdoc in Carmena’s lab, describe their work in a paper today in PLoS Biology.

The findings may “change the whole way that people have thought about how to approach brain-machine interfaces,” says Lena Ting, a professor of biomedical engineering at Emory University and the Georgia Institute of Technology, in Atlanta. Previous research, she explains, tried to use the parts of the brain that operate real limbs to control an artificial one. The Berkeley study suggests that an artificial arm may not need to rely on brain signals related to the natural arm; the brain can assimilate the artificial device as if it were a new part of the body.

Krishna Shenoy, head of the Neural Prosthetic Systems Laboratory, at Stanford University, says the study is “beautiful,” adding that the “day-over-day learning is impressive and has never before been demonstrated so clearly.”

At the heart of the findings is the fact that the researchers used the same set of neurons throughout the three-week-long study. Keeping track of the same neurons is difficult, and previous experiments had relied on varying groups of neurons from day to day.

The Berkeley researchers implanted arrays of microelectrodes on the primary motor cortex, about 2 to 3 millimeters deep into the brain, tapping 75 to 100 neurons. The procedure was similar to that of other groups. The difference was that here the scientists carefully monitored the activity of these neurons using software that analyzed the waveform and timing of the signals. When they spotted a subset of 10 to 40 neurons that didn’t seem to change from day to day, they’d start the experiment; several times, one or more neurons would stop firing, and they’d have to restart from scratch. But the persistence paid off.

Monitoring the neurons, the scientists placed the monkey’s right arm inside a robotic exoskeleton that kept track of its movement. On a screen, the monkey saw a cursor whose position corresponded to the location of its hand. The task consisted of moving the cursor to the center of the screen, waiting for a signal, and then dragging the cursor onto one of eight targets in the periphery. Correct maneuvers were rewarded with sips of fruit juice. While the animal played, the researchers recorded two data sets—the brain signals and corresponding cursor positions.

BRAIN POWER: During manual control [left], the monkey maneuvers the computer cursor while the researchers record the neuronal activity, used to create a decoder. Under brain control [right], the researchers feed the neuronal signals into the decoder, which then controls the cursor.

The next step was to determine whether the animal could perform the same task using only its brain. To find out, the researchers needed first to create a decoder, a mathematical model that translates brain activity into cursor movement. The decoder is basically a set of equations that multiply the firing rates of the neurons by certain numbers, or weights. When the weights have the right values, you can plug the neuronal data into the equations and they’ll spill out the cursor position. To determine the right weights, the researchers had only to correlate the two data sets they’d recorded.
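
In code, the simplest version of such a decoder is a linear least-squares fit of cursor position against firing rates. The sketch below uses synthetic numbers and is only meant to illustrate the idea described above, not the decoder the Berkeley group actually used:

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 30

# The two recorded data sets: firing rates and the matching cursor positions.
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 2))            # 2 outputs: x and y
cursor_xy = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# "Correlating the two data sets": recover the weights by least squares.
weights, *_ = np.linalg.lstsq(firing_rates, cursor_xy, rcond=None)

# Brain control: feed new firing rates through the decoder to place the cursor.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
print(new_rates @ weights)    # predicted (x, y) cursor position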

Next the scientists immobilized the monkey’s arm and fed the neuronal signals measured in real time into the decoder. Initially, the cursor moved spastically. But over a week of practice, the monkey’s performance climbed to nearly 100 percent and remained there for the next two weeks. For those later sessions, the monkey didn’t have to undergo any retraining—it promptly recalled how to skillfully maneuver the cursor.

The explanation lies in the behavior of the neurons. The researchers observed that the set of neurons they were monitoring would constantly fire while the animal was in its cage or even sleeping. But when the BMI session began, the neurons quickly locked into a pattern of activity—known as a cortical map—for controlling the cursor. (The researchers replicated the experiment with another monkey.)

The study is a big improvement over early experiments. In past studies, because researchers didn’t keep track of the same set of neurons, they had to reprogram the decoder every time to adapt to the new cortical activity. The changes also meant that the brain couldn’t form a cortical map of the prosthetic device. That limitation raised questions about whether paralyzed people would be able to use prosthetics with enough proficiency to make them really useful.

The Berkeley scientists showed that the cortical map can be stable over time and readily recalled. But they also demonstrated a third characteristic.

“These cortical maps are robust, resistant to interference,” says Carmena. “When you learn to play tennis, that doesn’t make you forget how to drive a car.”

To demonstrate that, the researchers taught the monkey how to use a second decoder. To create the new decoder, they again recorded neuronal activity while the animal manually moved the cursor using the exoskeleton arm. The new data sets contained small fluctuations compared with the original ones, resulting in different weights for the equations. Using a new decoder is analogous to giving a different racket to a tennis player, who needs some practice to get accustomed to it.

As expected, the monkey’s performance was poor at first, but after just a few days it reached nearly 100 percent. What’s more, the researchers could now switch back and forth between the old and new decoders. All the animal saw was that the cursor changed color, and its brain would promptly produce the signals needed. This wasn’t a one-trick monkey, so to speak.

But perhaps more surprising, the researchers also tested a shuffled decoder. They took one of the existing decoders and randomly redistributed the weights in the equations among the neurons. This meant that the new decoder, unlike the early ones, had no relationship to actual movements of the monkey’s arm. It was a bit like giving a tennis player a hammer instead of a different racket.
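
Continuing the toy decoder sketch from earlier, the “shuffling” step can be pictured as randomly permuting which fitted weight row is applied to which neuron, so that the mapping no longer reflects the arm movements it was fitted on (again, illustrative only):

import numpy as np

rng = np.random.default_rng(1)
n_neurons = 30

weights = rng.normal(size=(n_neurons, 2))                 # stand-in for a fitted decoder
shuffled_weights = weights[rng.permutation(n_neurons)]    # redistribute weights among neurons

rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
print(rates @ weights)            # cursor output under the original decoder
print(rates @ shuffled_weights)   # a quite different output under the shuffled one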

What followed was a big surprise: after only about three days, the monkeys learned the new decoder. Just as before, practice allowed the neurons to develop a cortical map for the new task.

“It’s pretty remarkable that it could adapt to basically a corrupted decoder,” says Nicho Hatsopoulos, a professor of computational neuroscience at the University of Chicago. He says there is a lot of focus in the field on building better and better decoders, but the new results suggest that may not be so important “because the monkey will learn to improve its own performance.”

Carmena believes that the brain’s ability to store prosthetic motor memories is a key step toward practical BMI systems. Yet he emphasizes that it’s hard to predict when this technology will become available and is careful not to give patients false expectations. He says that the improvements needed include making the BMI systems less invasive and able to incorporate more than just visual feedback, with prosthetics that can provide users with tactile information, for example.

Still, he knows where he wants to go.

“I have this idea for a long-term goal,” he says. “Can you tie your shoe in BMI mode?”

 

 

 
