Technology Articles


Robocopters to the Rescue

The next medevac helicopter won’t need a pilot

[youtube id="i8yV5D8Cpoc" width="600" height="350"]

An autonomous helicopter picks a landing site: See how the system sees and evaluates the terrain below

We’re standing on the edge of the hot Arizona tarmac, radio in hand, holding our breath as the helicopter passes 50 meters overhead. We watch as the precious sensor on its blunt nose scans every detail of the area, the test pilot and engineer looking down with coolly professional curiosity as they wait for the helicopter to decide where to land. They’re just onboard observers. The helicopter itself is in charge here.

Traveling at 40 knots, it banks to the right. We smile: The aircraft has made its decision, probably setting up to do a U-turn and land on a nearby clear area. Suddenly, the pilot’s voice crackles over the radio: “I have it!” That means he’s pushing the button that disables the automatic controls, switching back to manual flight. Our smiles fade. “The aircraft turned right,” the pilot explains, “but the test card said it would turn left.”

The machine would have landed safely all on its own. But the pilot could be excused for questioning its, uh, judgment. For unlike the autopilot that handles the airliner for a good portion of most commercial flights, the robotic autonomy package we’ve installed on Boeing’s Unmanned Little Bird (ULB) helicopter makes decisions that are usually reserved for the pilot alone. The ULB’s standard autopilot typically flies a fixed route or trajectory, but now, for the first time on a full-size helicopter, a robotic system is sensing its environment and deciding where to go and how to react to chance occurrences.

It all comes out of a program sponsored by the Telemedicine & Advanced Technology Research Center, which paired our skills, as roboticists from Carnegie Mellon University, with those of aerospace experts from Piasecki Aircraft and Boeing. The point is to bridge the gap between the mature procedures of aircraft design and the burgeoning world of autonomous vehicles. Aerospace, meet robotics.

The need is great, because what we want to save aren’t the salaries of pilots but their lives and the lives of those they serve. Helicopters are extraordinarily versatile, used by soldiers and civilians alike to work in tight spots and unprepared areas. We rely on them to rescue people from fires, battlefields, and other hazardous locales. The job of medevac pilot, which originated six decades ago to save soldiers’ lives, is now one of the most dangerous jobs in America, with 113 deaths for every 100 000 employees. Statistically, only working on a fishing boat is riskier.

These facts raise the question: Why are helicopters such a small part of the boom in unmanned aircraft? Even in the U.S. military, out of roughly 840 large unmanned aircraft, fewer than 30 are helicopters. In Afghanistan, the U.S. Marines have used two unmanned Lockheed Martin K-Max helicopters to deliver thousands of tons of cargo, and the Navy has used some of its 20-odd shipborne Northrop Grumman unmanned Fire Scout helicopters to patrol for pirates off the coast of Africa.

Photos: Left: Northrop Grumman Corp.; Above: Lockheed Martin
The U.S. Navy’s carrier-based copter, the Northrop Grumman Fire Scout [left], can fly itself onto and off of a moving deck. The U.S. Marine Corps’s K-Max ferries cargo to soldiers, dangling a “sling load” to work around its weakness in landing on rough ground. To rescue people, however, a robocopter needs better eyes and judgment.

So what’s holding back unmanned helicopters? What do unmanned airplanes have that helicopters don’t?

It’s fine for an unmanned plane to fly blind or by remote control; it takes off, climbs, does its work at altitude, and then lands, typically at an airport, under close human supervision. A helicopter, however, must often go to areas where there are either no people at all or no specially trained people—for example, to drop off cargo at an unprepared area, pick up casualties on rough terrain, or land on a ship. These are the scenarios in which current technology is most prone to fail, because unmanned aircraft have no common sense: They do exactly as they are told.

If you absentmindedly tell one to fly through a tree, it will attempt to do it. One experimental unmanned helicopter nearly landed on a boulder and had to be saved by the backup pilot. Another recently crashed during the landing phase. To avoid such embarrassments, the K-Max dangles cargo from a rope as a “sling load” so that the helicopter doesn’t have to land when making a delivery. Such work-arounds throw away much of the helicopter’s inherent advantage. If we want these machines to save lives, we must give them eyes, ears, and a modicum of judgment.

In other words, an autonomous system needs perception, planning, and control. It must sense its surroundings and interpret them in a useful way. Next, it must decide which actions to perform in order to achieve its objectives safely. Finally, it must control itself so as to implement those decisions.

A cursory search on YouTube will uncover videos of computer-controlled miniature quadcopters doing flips, slipping vertically through slots in a wall, and assembling structures. What these craft are missing, though, is perception: They perform inside the same kind of motion-capture lab that Hollywood uses to record actors’ movements for computer graphics animations. The position of each object is precisely known. The trajectories have all been computed ahead of time, then checked for errors by software engineers.

If you give such a quadcopter onboard sensors and put it outdoors, away from the rehearsed dance moves of the lab, it becomes much more hesitant. Not only will it sense its environment rather poorly, but its planning algorithms will barely react in time when confronted with an unusual development.

True, improvements in hardware are helping small quadcopters approach full autonomy, and somewhat larger model helicopters are already quite far along in that quest. For example, several groups have shown capabilities such as automated landing, obstacle avoidance, and mission planning on the Yamaha RMax, a 4-meter machine originally sold for remote-control crop dusting in Japan’s hilly farmlands. But this technology doesn’t scale up well, mainly because the sensors can’t see far enough ahead to manage the higher speeds of full-size helicopters. Furthermore, existing software can’t account for the aerodynamic limitations of larger craft.

Another problem with the larger helicopters is that they don’t actually like to hover. A helicopter typically lands more like an airplane than most people realize, making a long, descending approach at a shallow angle at speeds of 40 knots (75 kilometers per hour) or more and then flaring to a hover and vertical descent. This airplane-like profile is necessary because hovering sometimes requires more power than the engines can deliver.

The need for fast flying has a lot to do with the challenges of perception and planning. We knew that making large, autonomous helicopters practical would require sensors with longer ranges and faster measurement rates than had ever been used on an autonomous rotary aircraft, as well as software optimized to make quick decisions. To solve the first problem, we began with ladar—laser detection and ranging—a steadily improving technology and one that’s already widely used in robotic vehicles.

Ladar measures the distance to objects by first emitting a tightly focused laser pulse and then measuring how long it takes for any reflections to return. It creates a 3-D map of the surroundings by pulsing 100 000 times per second, steering the beam to many different points with mirrors, and combining the results computationally.

The ladar system we constructed for the ULB uses a “nodding” scanner. A “fast-axis” mirror scans the beam in a horizontal line up to 100 times per second while another mirror nods up and down much more slowly. To search for a landing zone, the autonomous system points the ladar down and uses the fast-axis line as a “push broom,” surveying the terrain as the helicopter flies over it. When descending nearer to a possible landing site, the system points the ladar forward and nods up and down, thus scanning for utility wires or other low-lying obstacles.

Because the helicopter is moving, the ladar measures every single point from a slightly different position and angle. Normally, vibration would blur these measurements, but we compensate for that problem by matching the scanned information with the findings of an inertial navigation system, which uses GPS, accelerometers, and gyros to measure position within centimeters and angles within thousandths of a degree. That way, we can properly place each ladar-measured reflection on an internal map.

Illustration: Emily Cooper
Topography for Robots: The unmanned helicopter can land in an unprepared site to evacuate a patient. Using a ladar system, the craft can construct maps, select landing sites, and generate safe approaches to them, giving itself a fallback option in case the situation changes.

To put this stream into a form the planning software can use, the system constantly updates two low-level interpretations. One is a high-resolution, two-dimensional mesh that encodes the shape of the terrain for landing; the other is a medium-resolution, three-dimensional representation of all the things the robot wants to avoid hitting during its descent. Off-the-shelf surveying software can create such maps, but it may take hours back in the lab to process the data. Our software creates and updates these maps essentially as fast as the data arrive.

The system evaluates the mesh map by continually updating a list of numerically scored potential landing places. The higher the score, the more promising the landing site. Each site has a set of preferred final descent paths as well as clear escape routes should the helicopter need to abort the attempt (for example, if something gets in the way). The landing zone evaluator makes multiple passes on the data, refining the search criteria as it finds the best locations. The first pass quickly eliminates areas that are too steep or rough. The second pass places a virtual 3-D model of the helicopter in multiple orientations on the mesh map of the ground to check for rotor and tail clearance, good landing-skid contact, and the predicted tilt of the body on landing.

In the moments before the landing, the autonomous system uses these maps to generate and evaluate hundreds of potential trajectories that could bring the helicopter from its current location down to a safe landing. The trajectory includes a descending final approach, a flare—the final pitch up that brings the helicopter into a hover—and the touchdown. Each path is evaluated for how close it comes to objects, how long it would take to fly, and the demands it would place on the aircraft’s engine and physical structure. The planning system picks the best combination of landing site and trajectory, and the path is sent to the control software, which actually flies the helicopter. Once a landing site is chosen, the system continuously checks its plan against new data coming from the ladar and makes adjustments if necessary.

That’s how it worked in simulations. The time had come to take our robocopter out for a spin.

So it was that we found ourselves on a sunny spring afternoon in Mesa, Ariz. Even after our system had safely landed itself more than 10 times, our crew chief was skeptical. He had spent decades as a flight-test engineer at Boeing and had seen many gadgets and schemes come and go in the world of rotorcraft. So far, the helicopter had landed itself only in wide-open spaces, and he wasn’t convinced that our system was doing anything that required intelligence. But today was different: Today he would match wits with the robot pilot.

Our plan was to send the ULB in as a mock casualty evacuation helicopter. We’d tell it to land in a cleared area and then have it do so again after we’d cluttered up the area. The first pass went without a hitch: The ULB flew west to east as it surveyed the landing area, descended in a U-turn, completed a picture-perfect approach, and landed in an open area close to where the “casualty” was waiting to be evacuated. Then our crew chief littered the landing area with forklift pallets, plastic boxes, and a 20-meter-high crane.

This time, after the flyover the helicopter headed north instead of turning around. The test pilot shook his head in disappointment and prepared to push the button on his stick to take direct control. But the engineer seated next to him held her hand up. After days of briefings on the simulator, she had begun to get a feel for the way the system “thought,” and she realized that it might be trying to use an alternative route that would give the crane a wider berth. And indeed, as the helicopter descended from the north, it switched the ladar scanner from downward to forward view, checking for any obstacles such as power lines that it wouldn’t have seen in the east-west mapping pass. It did what it needed to do to land near the casualty, just as it had been commanded.

This landing was perfect, except for one thing: The cameras had been set up ahead of time to record an approach from the east rather than the north. We’d missed it! So our ground crew went out and added more clutter to try to force the helicopter to come in from the east but land further away from the casualty. Again the helicopter approached from the north and managed to squeeze into a tighter space nearby, keeping itself close to the casualty. Finally, the ground crew drove out onto the landing area, intent on blocking all available spaces and forcing the machine to land from the east. Once again the wily robot made the approach from the north and managed to squeeze into the one small (but safe) parking spot the crew hadn’t been able to block. The ULB had come up with perfectly reasonable solutions—solutions we had deliberately tried to stymie. As our crew chief commented, “You could actually tell it was making decisions.”

That demonstration program ended three years ago. Since then we’ve launched a spin-off company, Near Earth Autonomy, which is developing sensors and algorithms for perception for two U.S. Navy programs. One of these programs, the Autonomous Aerial Cargo/Utility System (AACUS), aims to enable many types of autonomous rotorcraft to deliver cargo and pick up casualties at unprepared landing sites; it must be capable of making “hot” landings, that is, high-speed approaches without precautionary overflight of the landing zone. The other program will develop technology to launch and recover unmanned helicopters from ships.

It took quite a while for our technology to win the trust of our own professional test team. We must clear even higher hurdles before we can get nonspecialists to agree to work with autonomous aircraft in their day-to-day routines. With that goal in view, the AACUS program calls for simple and intuitive interfaces to allow nonaviator U.S. Marines to call in for supplies and work with the robotic aircraft.

In the future, intelligent aircraft will take over the most dangerous missions for air supply and casualty extraction, saving lives and resources. Besides replacing human pilots in the most dangerous jobs, intelligent systems will guide human pilots through the final portions of difficult landings, for instance by sensing and avoiding low-hanging wires or tracking a helipad on the pitching deck of a ship. We are also working on rear-looking sensors that will let a pilot keep constant tabs on the dangerous rotor at the end of a craft’s unwieldy tail.

Even before fully autonomous flight is ready for commercial aviation, many of its elements will be at work behind the scenes, making life easier and safer, just as they are doing now in fixed-wing planes and even passenger cars. Robotic aviation will not come in one fell swoop—it will creep up on us.

.

Long-Distance Quantum Cryptography

A hybrid system could secure transmissions over hundreds of kilometers

Using the quirky laws of quantum physics to encrypt data, in theory, assures perfect security. But today’s quantum cryptography can secure point-to-point connections only about 100 kilometers apart, greatly limiting its appeal.

Battelle Memorial Institute, an R&D laboratory based in Columbus, Ohio, is now building a “quasi-quantum” network that will break through that limit. It combines quantum and classical encryption to make a network stretching hundreds of kilometers with security that’s a step toward the quantum ideal.

“In a few years, our networks aren’t going to be very secure,” says Don Hayford, senior research leader in Battelle’s national security global business. Cryptography relies on issuing a secret key to unlock the contents of an encrypted message. One of the long-standing worries is that sufficiently powerful computers, or eventually quantum computers, could decipher the keys. “We looked at this and said, ‘Somebody needs to step up and do it,’ ” Hayford says.

By the end of next year, Battelle plans to have a ring-shaped network connecting four of its locations around Columbus—some of which transmit sensitive defense contract information—that will be protected using quantum key distribution, or QKD. If that smaller network is successful, Battelle then plans to connect to its offices in the Washington, D.C., area—a distance of more than 600 km—and potentially offer QKD security services to customers in government or finance over that network.

Quantum cryptography uses physics, specifically the quantum properties of light particles, to secure communications. It starts with a laser that generates photons and transmits them through a fiber-optic cable. The polarization of photons—whether they’re oscillating horizontally or vertically, for example—can be detected by a receiver and read as bits, which are used to generate the same “one-time pad” encryption key at both ends of the fiber. (A one-time pad is an encryption key that consists of a long set of random numbers, and so the message it hides also appears to be a random set of numbers.) Messages can then be sent securely between the sender and receiver by any means—even carrier pigeon—so long as they are encrypted using the key. If someone tries to intercept the key by measuring the state of the photons or by reproducing them, the system will be able to detect the intrusion and the keys will be thrown out.

Over long distances, though, light signals fade, and keys can’t be distributed securely. Ideally, “quantum repeaters” would store and retransmit photons, but such devices are still years away, say experts. Battelle’s approach is essentially to daisy-chain a series of QKD nodes and use classical encryption to bridge the gaps. Locations less than 100 km away will be connected by fiber-optic links and the data secured by a QKD system from Geneva-based ID Quantique. For two more-distant nodes (call them A and C) to communicate, there must be a “trusted node” between them (call it B). Nodes A and B can share a key by quantum means. Nodes B and C can also share a separate key by quantum means. So for A and C to communicate securely, A’s key must be sent to C under the encryption that B and C share. You might think the quantum-to-classical stopover in the trusted node might be a weak point, but even inside that node, keys are protected using one-time pad encryption, says Grégoire Ribordy, the CEO and cofounder of ID Quantique. The trusted node will also be located at a secure site and have other measures to prevent tampering.

These nodes, which are still under development, will be designed to integrate with corporate security systems, distributing keys for virtual private networks or database security within a building. “The idea is to set up a network which would be dedicated to cryptography-key management,” says Ribordy. ID Quantique’s gear will do the quantum key exchange, while Battelle will build the trusted nodes.

Researchers also hope to treat satellites in space as trusted nodes and to send photons through the air, rather than over optical-fiber links. In the nearer term, though, Battelle’s land-based QKD network may be the most viable approach to introducing quantum encryption into today’s networks. Yet it still faces significant challenges. For starters, the cost of point-to-point QKD is about 25 to 50 percent more than for classical encryption, says Ribordy, and connecting locations hundreds of kilometers apart would require multiple systems. That means Battelle will need to find a customer with an application that warrants the added expense. Verizon Communications, which offers network security services, tested QKD from 2005 to 2006, but it determined there wasn’t a viable business case because of distance limitations and the limited market for the technology.

Also, QKD hardware can’t easily plug into the existing telecom hardware, says Duncan Earl, chief technology officer of GridCom Technologies, which plans to use QKD for electricity grid control networks. Established networks have routers and switches that would ruin the key distribution’s delicate physics.

On a technical level, though, the work really only requires good engineering, not scientific breakthroughs, says Hayford. And the hybrid approach can accommodate future advances in quantum cryptography, such as quantum repeaters. Given the growing concerns over cybersecurity, it’s better to test the worth of quantum encryption sooner rather than later, he says.

Source: IEEE Spectrum, August 2013

.

Millimeter Waves May Be the Future of 5G Phones

Samsung’s millimeter-wave transceiver technology could enable ultrafast mobile broadband by 2020

Clothes, cars, trains, tractors, body sensors, and tracking tags. By the end of this decade, analysts say, 50 billion things such as these will connect to mobile networks. They’ll consume 1000 times as much data as today’s mobile gadgets, at rates 10 to 100 times as fast as existing networks can support. So as carriers rush to roll out 4G equipment, engineers are already beginning to define a fifth generation of wireless standards.

What will these “5G” technologies look like? It’s too early to know for sure, but engineers at Samsung and at New York University say they’re onto a promising solution. The South Korea–based electronics giant generated some buzz when it announced a new 5G beam-forming antenna that could send and receive mobile data faster than 1 gigabit per second over distances as great as 2 kilometers. Although the 5G label is premature, the technology could help pave the road to more-advanced mobile applications and faster data transfers.

Samsung’s technology is appealing because it’s designed to operate at or near “millimeter-wave” frequencies (3 to 300 gigahertz). Cellular networks have always occupied bands lower on the spectrum, where carrier waves tens of centimeters long (hundreds of megahertz) pass easily around obstacles and through the air. But this coveted spectrum is heavily used, making it difficult for operators to acquire more of it. Meanwhile, 4G networks have just about reached the theoretical limit on how many bits they can squeeze into a given amount of spectrum.

So some engineers have begun looking toward higher frequencies, where radio use is lighter. Engineers at Samsung estimate that government regulators could free as much as 100 GHz of millimeter-wave spectrum for mobile communications—about 200 times what mobile networks use today. This glut of spectrum would allow for larger bandwidth channels and greater data speeds.

Wireless products that use millimeter waves already exist for fixed, line-of-sight transmissions. And a new indoor wireless standard known as WiGig will soon allow multigigabit data transfers between devices in the same room. But there are reasons engineers have long avoided millimeter waves for broader mobile coverage.

5G Beam Scheme: Steerable millimeter-wave beams could enable multigigabit mobile connections. Phones at the edge of a 4G cell [blue] could use the beams to route signals around obstacles. Because the beams wouldn’t overlap, phones could use the same frequencies [pink] without interference. Phones near the 4G tower could connect directly to it [green].

For one thing, these waves don’t penetrate solid materials very well. They also tend to lose more energy than do lower frequencies over long distances, because they are readily absorbed or scattered by gases, rain, and foliage. And because a single millimeter-wave antenna has a small aperture, it needs more power to send and receive data than is practical for cellular systems.

Samsung’s engineers say their technology can overcome these challenges by using an array of multiple antennas to concentrate radio energy in a narrow, directional beam, thereby increasing gain without upping transmission power. Such beam-forming arrays, long used for radar and space communications, are now being used in more diverse ways. The Intellectual Ventures spin-off Kymeta, for instance, is developing metamaterials-based arrays in an effort to bring high-speed satellite broadband to remote or mobile locations such as airplanes.

Samsung’s current prototype is a matchbook-size array of 64 antenna elements connected to custom-built signal-processing components. By dynamically varying the signal phase at each antenna, this transceiver generates a beam just 10 degrees wide that it can switch rapidly in any direction, as if it were a hyperactive searchlight. To connect with one another, a base station and mobile radio would continually sweep their beams to search for the strongest connection, getting around obstructions by taking advantage of reflections.

“The transmitter and receiver work together to find the best beam path,” says Farooq Khan, who heads Samsung’s R&D center in Dallas. Khan and his colleagues Zhouyue Pi and Jianzhong Zhang filed the first patent describing a millimeter-wave mobile broadband system in 2010. Although the prototype revealed this year is designed to work at 28 GHz, the Samsung engineers say their approach could be applied to most frequencies between about 3 and 300 GHz. “Our technology is not limited to 28 GHz,” Pi says. “In the end, where it can be deployed depends on spectrum availability.”

In outdoor experiments near Samsung’s Advanced Communications Lab, in Suwon, South Korea, a prototype transmitter was able to send data at more than 1 Gb/s to two receivers moving up to 8 kilometers per hour—about the speed of a fast jog. Using transmission power “no higher than currently used in 4G base stations,” the devices were able to connect up to 2 km away when in sight of one another, says Wonil Roh, who heads the Suwon lab. For non-line-of-sight connections, the range shrank to about 200 to 300 meters.

Theodore Rappaport, a wireless expert at the Polytechnic Institute of NYU, has achieved similar results for crowded urban spaces in New York City and Austin, Texas. His NYU Wireless lab, which has received funding from Samsung, is working to characterize the physical properties of millimeter-wave channels. In recent experiments, he and his students simulated beam-forming arrays using megaphone-like “horn” antennas to steer signals. After measuring path losses between two horn transceivers placed in various configurations, they concluded that a base station operating at 28 or 38 GHz could provide consistent signal coverage up to about 200 meters.

Millimeter-wave transceivers may not make useful replacements for current cellular base stations, which cover up to about a kilometer. But in the future, many base stations will likely be much smaller than today’s, Rappaport points out. Already carriers are deploying compact base stations, known as small cells, in congested urban areas to expand data capacity. Not only could millimeter-wave technology add to that capacity, he says, it could also provide a simple, inexpensive alternative to backhaul cables, which link mobile base stations to operators’ core networks.

“The beauty of millimeter waves is there’s so much spectrum, we can now contemplate systems that use spectrum not only to connect base stations to mobile devices but also to link base stations to other base stations or back to the switch,” Rappaport says. “We can imagine a whole new cellular architecture.”

Other wireless experts remain skeptical that millimeter waves can be widely used for mobile broadband. “This is still theoretical; it has to be proven,” says Afif Osseiran, a master researcher at Ericsson and project coordinator for the Mobile and wireless communications Enablers for the Twenty-twenty Information Society (METIS). The newly formed consortium of European companies and universities is working to identify the most promising 5G solutions by early 2015.

Osseiran says METIS is considering a variety of technologies, including new data coding and modulation techniques, better interference management, densely layered small cells, multihop networks, and advanced receiver designs. He emphasizes that a key characteristic of 5G networks will be the use of many diverse systems that must work together. “Millimeter-wave technology is only one part of a bigger pie,” he says.

Source: IEEE Spectrum, July 2013

.

The Smartest, Greenest Grid

On Christmas night, Maja Bendtsen and her husband were curled up on the couch watching TV in their cozy house on the Danish island of Bornholm. Suddenly the house lost power. “The lights flickered briefly and then everything went black,” Bendtsen recalls.

Peeking out the window, they saw that the whole neighborhood was dark. A few quick phone calls confirmed that all of Bornholm was without power. Bendtsen, an engineer with the island’s utility, Østkraft Net, mentally ruled out the obvious culprits: It wasn’t a particularly busy night, as Christmas festivities had wrapped up with the midday meal, nor was the weather particularly cold or stormy.

She thought of one thing, though, and it made her heart sink. She phoned the Østkraft control room, where the chief engineer confirmed her suspicion: A ship dragging its anchor in the narrow Baltic Sea channel between Bornholm and Sweden had severed the 60-kilovolt, 70-megawatt undersea power cable that is the island’s only external source of electricity. It would take a repair crew more than six weeks to pinpoint the damage, haul the cable to the water’s surface, and fix it.

Incredibly, this was the fourth such mishap in 10 years. “We’re getting accustomed to it, almost,” Bendtsen says. By “accustomed” she doesn’t mean “resigned.” During the last decade, Østkraft has built up an impressive array of renewable sources like wind, solar, and biomass, which can now supply about three-quarters of the island’s demand. In the process, Bornholm has transformed itself into a kind of living laboratory for testing new energy ideas.

Now it is taking the ultimate step, by deploying one of the world’s most advanced smart grids, called the EcoGrid EU. It’s a four-year, €21 million (US $27 million) project, funded in part by the European Union, that aims to demonstrate how electricity will be produced, distributed, and consumed in the future. While any smart grid today can track in excruciating detail electricity supply, demand, and other information, Bornholm’s is one of the first in which individual household consumption can respond to real-time price changes in the electricity market. By doing that, the grid’s customers are helping to balance the sometimes big and sudden swings in supply that inevitably accompany the use of wind and solar power.

Illustration: Bryan Christie Design
Green Grid: The Danish island of Bornholm has only 41 000 full-time residents, but it now boasts one of the world’s most advanced smart grids, which should help optimize the operation of its diverse mix of energy sources, including wind, solar, and biomass, as well as traditional coal and diesel.

And as Bornholm goes, so goes Denmark and the rest of Europe. TheEuropean Commission’s 20/20/20 Plan, for instance, states that by the year 2020, greenhouse gas emissions will be cut by 20 percent, while renewable energy usage and energy efficiency will both rise by 20 percent. Last year, theDanish parliament approved an even more ambitious target: to have renewables supply 35 percent of the country’s total energy needs—not just electricity but also heating and transportation—by 2020, and an incredible 100 percent by 2050. Can those targets actually be reached?

Photo: Nicky Bonne

Lost at Sea: Bornholm’s only link to the Nordic grid is via a three-phase undersea cable, which has broken four times in the last 10 years.


That’s what the EcoGrid project aims to find out. The choice of Bornholm, with its 41 000 full-time residents, to host it was no accident. Although the island’s beauty draws hundreds of thousands of tourists every year, it’s not just a vacation destination. Commercial fishing, dairy farming, and arts and crafts all buttress the economy and give Østkraft a representative mixture of commercial, industrial, and residential customers, as well as schools, a hospital, an airport, and an international seaport.

“We’re like a microcosm of Danish society,” Bendtsen says. “We are in many senses a picture of the future power system in Denmark.” And by studying how a high-tech grid can help this little island cope with the challenges of renewable energy, EcoGrid’s organizers hope to discover larger lessons for the wider world.

Bornholm has long held a special place in the Danish psyche. According to local legend, when God got to the end of his creation he still had bits of paradise left over, and so he threw them all down in the Baltic Sea and created Bornholm. In medieval times, another tale goes, Danish kings hid their mistresses away in the island’s large forest. Today, Europeans flock to Bornholm in the summer for its beautiful sandy beaches, sunny (for Denmark) weather, and, yes, that forest.

Photo: Technical University of Denmark (DTU)
Island Oversight: Researchers at the Technical University of Denmark, outside Copenhagen, can monitor Bornholm’s power grid in real time.

For Jacob Østergaard, though, the most attractive thing about Bornholm isn’t the beaches or the sun: It’s that pesky undersea cable, or, more important, what that cable allows him as a power engineer to do. Østergaard, a professor of electrical engineering at the Technical University of Denmark (DTU), in Lyngby, is involved in a number of electricity projects on Bornholm, including EcoGrid. The cable can be switched off at will, he explains, putting the Bornholm grid into what’s known in electricity circles as “island” mode. And that’s interesting, he says, because the wealth of wind power makes the Bornholm grid challenging to operate and fascinating to study. Last year, he and his colleagues even built a duplicate of the Østkraft control room on the DTU campus to monitor the Bornholm grid in real time.

On a windy day, Bornholm’s turbines can supply up to 30 MW of power, or more than half of the island’s peak load of 55 MW. But the wind blows as it will, and that variability and unpredictability can wreak havoc on the grid’s stability. If the wind abruptly dies, for instance, electricity supply could dip way below demand, causing the grid’s nominal 50-hertz frequency to likewise plummet. A dip or a spike of just over a tenth of a hertz is cause for alarm, Østergaard says, and if it drifts out of kilter even further—to, say, 47 Hz—it can trigger a blackout.

Something close to that happened on 17 September 2009, when the sea cable was shut down for maintenance. To keep the grid balanced, the wind turbines were also initially shut down. At 11:25 a.m., all was calm, with the grid frequency steadily hovering just north of 50 Hz. Then, at 11:26 a.m., six of the turbines were turned on, and over the next several minutes their share of the island’s power supply rose to 15 percent.

But as the wind output grew erratic, so did the grid frequency, spiking more than a tenth of a hertz several times and dropping sharply to 49.8 Hz just before noon. Østkraft engineers and DTU researchers were closely monitoring the situation and quickly stepped in, ramping up the output of the island’s conventional generators and dialing back the proportion of wind to 10 percent, at which point the frequency returned to normal.

Photo: Nicky Bonne
Fueling the Future: The main power plant on Bornholm burns wood chips in addition to coal and diesel.

Dozens of experiments before and since have confirmed that there’s an upper limit of about 15 percent on the amount of wind power that Bornholm’s grid can absorb when in island mode. And to greater or lesser degrees, all power grids that have a substantial amount of wind and solar do the same thing, falling back on traditional “peak” generators to compensate for gaps in renewable output. Some grid operators also store electricity in pumped hydro or compressed-air installations or in industrial-grade batteries, but the latter aren’t yet economical, and the former can be used only in certain locations.

But what if, instead of boosting generation when demand is high, you just cut back demand? Answering that basic question is at the heart of the EcoGrid.

The goal of the smart grid isn’t to demonstrate that Bornholm can be energy independent, notes Bendtsen, sitting in one of the light-filled offices at Østkraft’s sleek headquarters just outside the main town of Rønne. The island is independent already: At present it has about 50 MW of domestic capacity, from a mix of conventional coal and diesel generators, three dozen wind turbines that dot the countryside like giant pinwheels, rooftop photovoltaics, a biogas plant, and several wood-chip- and straw-fired plants. As a result, the Christmas night blackout lasted only a few hours, the time it took to bring the domestic plants online.

But producing electricity that way is expensive, and so the cable to Sweden lets the island buy electricity from the Nordic grid when it’s cheap and sell when the price is high. Ordinarily, trading in electricity markets is done at the level of utilities and the like. EcoGrid is letting individual households and smaller businesses also become market players.

The idea is to shift the consumption of electricity to periods of the day and night when electricity demand and prices are low, Bendtsen explains. You could do that by simply sending people a text message whenever prices change. But that would quickly get tiresome.

Photo: Nicky Bonne
From Waste to Watts: Bornholm’s 2-megawatt biogas plant [right] converts manure and other organic waste into electricity and heat.

“And if we let people interact directly with the market, their behavior will, of course, change,” says Østergaard. “Everyone will want to charge their electric vehicles when the price is low, for example. If too many people do that, you create congestion in the weakest parts of the grid.”

Instead, EcoGrid’s people have installed smart grid controllers in about 1200 households and a hundred businesses, and since April the controllers have been receiving a continuous stream of data based on the 5-minute price for electricity in the Nordic electricity market, which covers Denmark, Finland, Norway, and Sweden. The controllers wirelessly communicate with designated appliances, and algorithms determine whether to turn each one on or off, based on factors like the time of day, the weather, and current, past, and future market prices.

At first, the project’s organizers envisioned regulating a whole suite of household machines—dishwashers, washing machines, refrigerators, TVs, lights. It turns out, though, that although such smart appliances have been on the market for years, there’s still no standard protocol for automating them. So your dishwasher might speak ZigBee while your freezer converses in KNX, and they can’t easily understand each other.

Standards clearly would help, says Bendtsen. “Imagine that you go to a white goods store to buy a new dishwasher,” she says. “You have to consider not just what size and what color and how much energy and water does it use but which language does it speak. Fine if you’re an engineer, but we need some sort of standard so that ordinary people don’t have to think about all these things themselves.”

In the meantime, the EcoGrid is keeping things simple and dealing primarily with households that have electric heating systems and heat pumps. In 700 of those households, the heating system is directly controlled using algorithms developed at IBM’s research lab in Zurich. A thermal model of each household has been created, based on factors like electricity usage patterns and the size of the windows and walls, explains Dieter Gantenbein, smart grid project leader at IBM Research–Zurich.

“If you leave the window open a lot to let your cat in and out, then your parameters will be different from somebody who keeps the windows closed,” he says. From the thermal model, he adds, “we can determine the electrical flexibility of this house—we have a planned strategy on how to throttle the heat pump up or down. The goal is that the owners do not see any reduction in their quality of life.” About 100 businesses on Bornholm are being similarly equipped.

Photo: Nicky Bonne
Showing the Way: A demonstration house on Bornholm is equipped with rooftop photovoltaics and smart-grid devices to let people see how these technologies work.

Another 500 or so households are being treated as a single electricity-consuming unit; Siemens’s Denmark subsidiary is coordinating that part of the smart grid. The remainder of the 1900 households enrolled in the project—about a tenth of the island—are just getting smart meters, which provide them with fine-grained information about their electricity consumption and market prices but don’t control their usage in any way.

Interestingly, EcoGrid participants aren’t being told to expect a drop in their electricity bills. That’s partly a way to manage expectations, but it’s also just being realistic: Numerous studies in Denmark and other countries have shown that the incremental savings people get from being more energy efficient usually aren’t enough to change their behavior. That said, Gantenbein notes, there’s been no lack of volunteers on Bornholm.

“Danes take preservation of the environment close to their hearts,” he says. “It’s like a sport. They heat carefully, they close doors, they use different technologies, and by being engaged, they are very enthusiastic to participate in such an ambitious pilot.”

Photo: Nicky Bonne

Environmental Enthusiast: Martin Kok-Hansen [left] was one of the first homeowners to sign up for the EcoGrid smart grid. He’s already decided to swap his halogen lights for more efficient bulbs.

Martin Kok-Hansen is just such an enthusiast. He and his family live in a one-story brick house on the northern edge of Rønne, and he was among the first on Bornholm to sign up for the smart grid. The real estate agent says he decided to participate for the same reason he traded in his Jeep Grand Cherokee for a Volkswagen Golf a few years back. “In the future, we won’t have that much power,” he says. “And my son is probably going to have kids as well. Where are they going to get all the power from?”

There’s now a Landis+Gyr smart meter on the wall of Kok-Hansen’s garage, a small relay and reader in the laundry room that turns the electric heater on and off, and a digital thermostat in the living room; all three of these units communicate wirelessly with a “gateway” controller and router that in turn connect via the Internet to the utility company. The gateway and most of the other hardware, as well as the household communication and end-user Web services, were designed by a company called GreenWave Reality, based in Irvine, Calif.

Like other participants, Kok-Hansen can set limits on how warm or cool his house gets. “If it’s 21 °C in here and they need the power, they can switch off the heat and let it fall to 18 °C,” he says. That’s two or three degrees cooler than normal, but he thinks he can cope. “Maybe you put on a sweater for a while.”

Photo: Nicky Bonne

Smart Beer: Bornholm has about 200 experimental bottle coolers outfitted with special controllers so that they can respond to changes in grid frequency by automatically turning off or on.

Standing in his recently remodeled kitchen, laptop perched on the black granite countertop, he logs into his account on the Østkraft website. He can see, in near real time, how much electricity he’s using. It’s been illuminating, to say the least.

“Right now I’m using 1200 watts,” he says, pointing to a graph onscreen. “But when you turn this one on”—he walks over to a wall switch and flicks on the recessed halogen lights overhead—“you see that the usage goes way up.” Sure enough, within a few seconds, the graphed value nearly doubles. That’s because each halogen bulb is 50 watts, and the kitchen has 16 of them. At current rates, 1 kilowatt-hour runs about 2 Danish kroner, or 35 cents. So keeping those lights on just 4 hours a day is costing him $500 a year, he figures. He plans to swap them out soon for compact fluorescents or LEDs.

“I definitely will change those,” he says. “This is a whole new lifestyle.”

The SuperBest supermarket just off the main square in Rønne is packed on a Saturday afternoon. A young man stops at a refrigerator, pulls out a few bottles of beer, and puts them in his cart. He doesn’t bother to read the explanatory sticker plastered across the refrigerator’s glass front, nor does he glance up at the shoebox-size device sitting atop the cooler. And so he may have no inkling that this refrigerator, and about 200 other units like it on Bornholm, is special: Like the EcoGrid’s heat pumps, the bottle coolers are helping to balance the grid.

Two years ago, researchers at DTU modified each cooler so that it directly monitors grid frequency, explains Østergaard. In a series of experiments, his group has shown that the coolers can be programmed to turn themselves off when the frequency drops by more than a tenth, and then automatically turn back on when the frequency stabilizes. “If it’s just a small frequency variation, then you just have a small number of coolers respond,” he explains. “But if there’s a large variation, then all of them will react.”

The concept of using coolers, pumps, and other appliances in this way has been kicking around for a while, Østergaard says, but only in the last decade or so has it become economically feasible. “These days, every cooler has a thermostat with a microcontroller and processor, so you can just program it to do this,” he notes. Whereas the heating systems hooked up to the EcoGrid are reacting to market prices, which are an indirect measure of power supply and demand, the Bornholm bottle coolers are detecting conditions on the grid itself.

Østergaard says both approaches are useful: “It’s important to balance the grid on all time scales, from seconds and minutes to days and years.” And by using information technology to strategically roll back demand, rather than ramping up supply, the smart grid can create a more efficient network. “Moving bits and bytes is less expensive than moving amperes,” he says.

As to whether Denmark and the rest of Europe will meet their lofty energy goals, Østergaard’s not saying. “It’s good to have goals,” he allows. “I don’t know if we will succeed. But without projects like this, there is no chance at all.”

Source: IEEE Spectrum, May 2013

.

What is Google Glass?

Google Glass is an attempt to free data from desktop computers and portable devices like phones and tablets, and place it right in front of your eyes.

Essentially, Google Glass is a camera, display, touchpad, battery and microphone built into spectacle frames so that you can perch a display in your field of vision, film, take pictures, search and translate on the go.

The principle is one that has been around for years in science fiction, and more recently it’s become a slightly clunky reality. In fact, the “heads-up display” putting data in your field of vision became a reality as early as 1900 when the reflector sight was invented.

Google Glass options

Google Glass uses display technology instead to put data in front (or at least, to the upper right) of your vision courtesy of a prism screen. This is designed to be easily seen without obstructing your view. According to Google the display is “the equivalent of a 25-inch high definition screen from eight feet away”. There’s no official word on native resolution, but 640 x 360 has been widely mooted.

Overlaying data onto your vision has obvious benefits, many of which are already functional in Google Glass. Directions become more intuitive (although it sounds like there is no GPS on board, so you will have to pair it with your phone), you can view real-time translations or transcriptions of what is being said, and you can scroll through and reply to messages – all on the fly.

Google Glass – certainly capturing plenty of attention

The embedded camera obviously does not need a viewfinder because it is simply recording your first-person perspective, allowing you to take snaps or footage of what you are actually seeing.

Any function that requires you to look at a screen could be put in front of you.

Controlling this data is the next neat trick. With a microphone and touchpad on one arm of the frame, you can select what you want to do with a brief gesture or by talking to the device, and Google Glass will interpret your commands.

Google Glass can also provide sound, with bone-conduction technology confirmed. This vibrates your skull to create sound, which is both more grisly-sounding and much less cumbersome than traditional headphones.

What can Google Glass do?

As well as Google’s own list of features, the early apps for Google Glass provide a neat glimpse into the potential of the headset.

As well as photos and film – which require no explanation – you can use Google’s Hangouts software to video conference with your friends and show them what you’re looking at.

You’ll also be able to use Google Maps to get directions, although with GPS absent from the spec list, you’ll need to tether Glass to your phone.

To do that, Google offers the MyGlass app. This pairs your headset with an Android phone. As well as sharing GPS data, this means messages can be received, viewed on the display, and answered using the microphone and Google’s voice-to-text functionality.

Google has given its Glass project a big boost by snapping up voice specialists DNNresearch.

That functionality will also bring the ability to translate the words being spoken to you into your own language on the display. Obviously you’ll need a WiFi connection or a hefty data plan if you’re in another country, but it’s certainly a neat trick if it works.

Third parties are also already developing some rather cool/scary apps for Google Glass – including one that allows you to identify your friends in a crowd, and another that allows you to dictate an email.

The New York Times app gives an idea how news will be displayed when it’s asked for: a headline, byline, appropriate image and number of hours since the article was published are displayed.

Google Glass – another reason not to miss your flight

Other cool ideas include an air carrier’s suggestion that you could have flight details beamed to you while you are waiting at the airport. Basically, the sky’s the limit.

.

Swarm Robotics: A Developing Field

Nature offers many wonderful examples of teamwork, even among quite simple species: ant colonies, bird flocks, animal herds, bacterial colonies, and schools of fish. Instead of individual intelligence, these creatures show a collective behaviour known as swarm behaviour, and thus possess what can be called swarm intelligence. The key features of such behaviour are that each individual follows simple rules and that there is no centralised control structure dictating what to do; yet the seemingly random, localised interactions between individuals give rise to “intelligent” global behaviour. Such behaviour has great scope in the field of robotics.


The artificial simulation of swarm behaviour can be traced back to 1986, when the artificial life and computer graphics expert Craig Reynolds developed a program called Boids that could simulate the flocking behaviour of birds. Three simple rules (separation, alignment, and cohesion) were enough to produce emergent behaviour, in which complex systems and patterns arise from a multiplicity of relatively simple interactions. The program has seen many interesting uses, such as creating realistic-looking flocks of birds and other creatures; examples include the flying bird-like creatures in the video game Half-Life and the swarm of bats in the film Batman Returns.


In a swarm, each individual element is called an agent: an autonomous entity capable of observing and acting upon its environment and of directing its activity towards achieving goals. Swarm robotics is a new approach to the coordination of multi-robot systems in which a desired collective behaviour emerges from the interactions between the robots and between the robots and their environment. Swarm robotics generally differs from other distributed robotic systems in that it emphasizes large numbers of robots and promotes scalability, which is achieved by using localised communication over wireless links such as radio frequency or infrared. A swarm-intelligent approach tries to achieve meaningful behaviour at the swarm level, instead of the individual level.


One of the most interesting advances in the field of swarm robotics came from the project team headed by Dr. Marco Dorigo with their work “Swarm-bots: Swarms of self-assembling artifacts”. Its aim was to study new approaches to the design and implementation of self-organizing and self-assembling artifacts. The team designed small, autonomous mobile robots called s-bots, capable of basic tasks such as autonomous navigation, perception of the surrounding environment, and grasping of objects. A self-assembling, self-organising robot colony made up of some 30 to 35 s-bots is called a swarm-bot; it can perform exploration, navigation, and transportation of heavy objects over very rough terrain, especially where a single s-bot would have difficulty achieving the task alone. The project, which was limited to two-dimensional terrain, lasted 42 months and was successfully completed on March 31, 2005.


The Swarmanoid project extended the work of the Swarm-bots project to three-dimensional environments. This system is made up of heterogeneous, dynamically connected, small autonomous robots, collectively called a swarmanoid. It comprises three robot types: eye-bots, hand-bots, and foot-bots. The eye-bots can fly and attach to the ceiling, sensing and analysing the environment from a high vantage point. The hand-bots can climb vertical surfaces such as walls. The foot-bots, based on the s-bots, are specialised for moving on rough terrain and for transporting either objects or other robots. Together these three types of agents form a heterogeneous system capable of operating in three-dimensional space. The project was successfully completed on September 30, 2010. A very impressive video titled “Swarmanoid, the movie”, which describes the project and shows the robots cooperating to steal a book from a shelf, is available on the Internet; it won the AAAI 2011 video competition.


The potential applications of swarm robotics include semi-automated space exploration, search and rescue, and underwater exploration. Swarms can also perform tasks that demand miniaturization (nanorobotics, microbotics), such as distributed sensing in micromachinery or inside the human body, and tasks that call for cheap designs, such as mining or agricultural foraging. The study of artificial swarm behaviour also helps us better understand biological swarming (bird and insect migration, bee and ant colonies, fish shoaling and schooling, and so on). The possibilities offered by this developing field of robotics are thus immense.

[highlight color="yellow"]Rahul R. (CET)[/highlight]