Tuesday, January 24, 2017

Google is giving $20m to the first team to land a spacecraft on the Moon in 2017

Google's nearly decade-long quest to get private space explorers to land a robotic spacecraft on the Moon is finally coming down to the wire – the five remaining teams have until the end of 2017 to meet the epic challenge.
The Lunar XPRIZE competition, first announced back in 2007, may amount to the most ambitious 'contest' Earth has ever seen – not to mention the Moon – and now the race is on, as the remaining entrants scramble for the chance to make space history.
Of the 16 teams that had made it this far, each was required to secure a launch contract with a rocket services company by 31 December 2016 – a condition that only the five finalists satisfied. Those finalists must now launch by 31 December 2017.
Once their robotic, uncrewed spacecraft have been launched into space, the teams have to remotely land them on the Moon, where they will travel at least 500 metres (1,640 ft) across the lunar surface, and then transmit images and high-definition video back to Earth.
Not exactly a cakewalk, but the first team to rise to the challenge will take home a cool $20 million for their achievement.
Other than rewarding incredible ingenuity, one of the main aims of the competition is to raise awareness of space travel and put the scientific spotlight back on the Moon, which could be the ideal stepping stone for future space exploration efforts in the coming decades.
"Each of these teams has pushed the boundaries to demonstrate that you don't have to be a government superpower to send a mission to the Moon," Gonzales-Mowrer said in a press release, "while inspiring audiences to pursue the fields of science, technology, engineering, and mathematics."
So who's who among the last remaining competitors?
SpaceIL
From Tel Aviv, Israel, the non-profit SpaceIL has designed a 'hopper' craft that will land on the lunar surface, then launch off and fly 500 metres, before touching down again.
Moon Express
This US startup hails from Cape Canaveral, Florida, and calls the Moon the 'eighth continent'. They're also trialling a hopper lander.
Synergy Moon
An international finalist, Synergy Moon will launch its own craft on a NEPTUNE 8 rocket from a site in California.
Team Indus
This Indian finalist will deploy what looks to officially be the world's cutest-looking rover from its lander. Good luck!
Hakuto
Hitching a ride with Team Indus's Moon lander, this Japanese team has a dual-rover system.
The larger, four-wheeled 'Moonraker' is tethered to the smaller two-wheeled 'Tetris', and can lower the smaller rover to explore holes in the lunar surface.
All told, this is shaping up to be one heck of a space race, and it feels so much more real now that we're finally on the home stretch.
We can't wait to see how this awesome contest plays out from here.

Electrons in Graphene Behave Like Light, Only Better

From Controlled Environments

A team led by Cory Dean, assistant professor of physics at Columbia University, Avik Ghosh, professor of electrical and computer engineering at the University of Virginia, and James Hone, Wang Fong-Jen Professor of Mechanical Engineering at Columbia Engineering, has directly observed — for the first time — negative refraction for electrons passing across a boundary between two regions in a conducting material. First predicted in 2007, this effect has been difficult to confirm experimentally. The researchers were able to observe the effect in graphene, demonstrating that electrons in the atomically thin material behave like light rays, which can be manipulated by such optical devices as lenses and prisms. The findings, which are published in the September 30 edition of Science, could lead to the development of new types of electron switches, based on the principles of optics rather than electronics.
“The ability to manipulate electrons in a conducting material like light rays opens up entirely new ways of thinking about electronics,” says Dean. “For example, the switches that make up computer chips operate by turning the entire device on or off, and this consumes significant power. Using lensing to steer an electron ‘beam’ between electrodes could be dramatically more efficient, solving one of the critical bottlenecks to achieving faster and more energy efficient electronics.”
Dean adds, “These findings could also enable new experimental probes. For example, electron lensing could enable on-chip versions of an electron microscope, with the ability to perform atomic scale imaging and diagnostics. Other components inspired by optics, such as beam splitters and interferometers, could additionally enable new studies of the quantum nature of electrons in the solid state.”
While graphene has been widely explored for supporting high electron speed, it is notoriously hard to turn off the electrons without hurting their mobility. Ghosh says, “The natural follow-up is to see if we can achieve a strong current turn-off in graphene with multiple angled junctions. If that works to our satisfaction, we’ll have on our hands a low-power, ultra-high-speed switching device for both analog (RF) and digital (CMOS) electronics, potentially mitigating many of the challenges we face with the high energy cost and thermal budget of present day electronics.”
Light changes direction — or refracts — when passing from one material to another, a process that allows us to use lenses and prisms to focus and steer light. A quantity known as the index of refraction determines the degree of bending at the boundary, and is positive for conventional materials such as glass. However, through clever engineering, it is also possible to create optical “metamaterials” with a negative index, in which the angle of refraction is also negative. “This can have unusual and dramatic consequences,” Hone notes. “Optical metamaterials are enabling exotic and important new technologies such as super lenses, which can focus beyond the diffraction limit, and optical cloaks, which make objects invisible by bending light around them.”
Electrons travelling through very pure conductors can travel in straight lines like light rays, enabling optics-like phenomena to emerge. In materials, the electron density plays a similar role to the index of refraction, and electrons refract when they pass from a region of one density to another. Moreover, current carriers in materials can either behave like they are negatively charged (electrons) or positively charged (holes), depending on whether they inhabit the conduction or the valence band. In fact, boundaries between hole-type and electron-type conductors, known as p-n junctions (“p” positive, “n” negative), form the building blocks of electrical devices such as diodes and transistors.
“Unlike in optical materials,” says Hone, “where creating a negative index metamaterial is a significant engineering challenge, negative electron refraction occurs naturally in solid state materials at any p-n junction.”
The development of two-dimensional conducting layers in high-purity semiconductors such as GaAs (Gallium arsenide) in the 1980s and 1990s allowed researchers to first demonstrate electron optics including the effects of both refraction and lensing. However, in these materials, electrons travel without scattering only at very low temperatures, limiting technological applications. Furthermore, the presence of an energy gap between the conduction and valence band scatters electrons at interfaces and prevents observation of negative refraction in semiconductor p-n junctions. In this study, the researchers’ use of graphene, a 2D material with unsurpassed performance at room temperature and no energy gap, overcame both of these limitations.
The possibility of negative refraction at graphene p-n junctions was first proposed in 2007 by theorists working at both the University of Lancaster and Columbia University. However, observation of this effect requires extremely clean devices, such that the electrons can travel ballistically, without scattering, over long distances.

Thursday, January 12, 2017

Superconductivity Gets a New Spin

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have made a discovery that could lay the foundation for quantum superconducting devices. Their breakthrough solves one of the main challenges to quantum computing: how to transmit spin information through superconducting materials.
Every electronic device — from a supercomputer to a dishwasher — works by controlling the flow of charged electrons. But electrons can carry so much more information than just charge; electrons also spin, like a gyroscope on an axis.
Harnessing electron spin is really exciting for quantum information processing because not only can an electron spin up or down — one or zero — but it can also spin any direction between the two poles. Because it follows the rules of quantum mechanics, an electron can occupy all of those positions at once.  Imagine the power of a computer that could calculate all of those positions simultaneously.
A whole field of applied physics, called spintronics, focuses on how to harness and measure electron spin and build spin equivalents of electronic gates and circuits.
By using superconducting materials through which electrons can move without any loss of energy, physicists hope to build quantum devices that would require significantly less power.
But there’s a problem.
According to a fundamental property of superconductivity, superconductors can't transmit spin: any electron pairs that pass through a superconductor will have a combined spin of zero.
In work published recently in Nature Physics, the Harvard researchers found a way to transmit spin information through superconducting materials.
“We now have a way to control the spin of the transmitted electrons in simple superconducting devices,” says Amir Yacoby, Professor of Physics and of Applied Physics at SEAS and senior author of the paper.
It’s easy to think of superconductors as particle superhighways, but a better analogy would be a super carpool lane, as only paired electrons can move through a superconductor without resistance.
These pairs are called Cooper Pairs and they interact in a very particular way. If the way they move in relation to each other (physicists call this momentum) is symmetric, then the pair’s spin has to be asymmetric — for example, one negative and one positive for a combined spin of zero. When they travel through a conventional superconductor, Cooper Pairs’ momentum has to be zero and their orbit perfectly symmetrical.
But if you can change the momentum to asymmetric — leaning toward one direction — then the spin can be symmetric. To do that, you need the help of some exotic (aka weird) physics.
Superconducting materials can imbue non-superconducting materials with their conductive powers simply by being in close proximity. Using this principle, the researchers built a superconducting sandwich, with superconductors on the outside and mercury telluride in the middle. The atoms in mercury telluride are so heavy and the electrons move so quickly, that the rules of relativity start to apply.
“Because the atoms are so heavy, you have electrons that occupy high-speed orbits,” says Hechen Ren, coauthor of the study and graduate student at SEAS. “When an electron is moving this fast, its electric field turns into a magnetic field which then couples with the spin of the electron. This magnetic field acts on the spin and gives one spin a higher energy than another.”
So, when the Cooper Pairs hit this material, their spin begins to rotate.
“The Cooper Pairs jump into the mercury telluride and they see this strong spin orbit effect and start to couple differently,” says Ren. “The homogenous breed of zero momentum and zero combined spin is still there but now there is also a breed of pairs that gains momentum, breaking the symmetry of the orbit. The most important part of that is that the spin is now free to be something other than zero.”
The team could measure the spin at various points as the electron waves moved through the material. By using an external magnet, the researchers could tune the total spin of the pairs.
“This discovery opens up new possibilities for storing quantum information. Using the underlying physics behind this discovery also provides new possibilities for exploring the underlying nature of superconductivity in novel quantum materials,” says Yacoby.
This research was coauthored by Sean Hart, Michael Kosowsky, Gilad Ben-Shach, Philipp Leubner, Christoph Brüne, Hartmut Buhmann, Laurens W. Molenkamp, and Bertrand I. Halperin.
Source: Harvard University

Superconductor Sets New Guinness World Record

Harnessing the equivalent of three tons of force inside a golf ball-sized sample of material that is normally as brittle as fine china, the team beat a record that had stood for more than a decade – a feat now officially recognized by Guinness World Records.
The Guinness World Records website says, "A world record for a trapped field in a superconductor was achieved in 2014 by a team of engineers led by Professor David Cardwell. The strongest magnetic field trapped in a superconductor is 17.6 tesla, achieved by researchers from the University of Cambridge, the National High Magnetic Field Laboratory, and the Boeing Company, as published in Superconductor Science and Technology on June 25, 2014.
"The team used gadolinium boron carbon oxide (GdBCO) which is typically very brittle, then doped the structure with silver, and 'shrink wrapped' steel around the thumb-sized object to increase its strength. Superconductors which trap strong magnetic fields have a wide variety of applications, from Maglev trains to electricity storage."
At 17.6 tesla, the field is roughly 100 times stronger than that of a typical fridge magnet — beating the previous record by 0.4 tesla.
The research demonstrates the potential of high-temperature superconductors for applications in a range of fields, including flywheels for energy storage, “magnetic separators” which can be used in mineral refinement and pollution control, and in high-speed levitating monorail trains.

'Atomic Sandwiches' Could Make Computers 100x Greener

From Controlled Environments

Researchers have engineered a material that could lead to a new generation of computing devices, packing in more computing power while consuming a fraction of the energy that today's electronics require.
Known as a magnetoelectric multiferroic material, it combines electrical and magnetic properties at room temperature and relies on a phenomenon called "planar rumpling."
The new material sandwiches together individual layers of atoms, producing a thin film with magnetic polarity that can be flipped from positive to negative or vice versa with small pulses of electricity. In the future, device-makers could use this property to store digital 0's and 1's, the binary backbone that underpins computing devices.
"Before this work, there was only one other room-temperature multiferroic whose magnetic properties could be controlled by electricity," said John Heron, assistant professor in the Department of Materials Science and Engineering at the University of Michigan, who worked on the material with researchers at Cornell University. "That electrical control is what excites electronics makers, so this is a huge step forward."
Room-temperature multiferroics are a hotly pursued goal in the electronics field because they require much less power to read and write data than today's semiconductor-based devices. In addition, their data doesn't vanish when the power is shut off. Those properties could enable devices that require only brief pulses of electricity instead of the constant stream that's needed for current electronics, using an estimated 100 times less energy.
"Electronics are the fastest-growing consumer of energy worldwide," said Ramamoorthy Ramesh, associate laboratory director for energy technologies at Lawrence Berkeley National Laboratory. "Today, about 5 percent of our total global energy consumption is spent on electronics, and that's projected to grow to 40-50 percent by 2030 if we continue at the current pace and if there are no major advances in the field that lead to lower energy consumption."
To create the new material, the researchers started with thin, atomically precise films of hexagonal lutetium iron oxide (LuFeO3), a material known to be a robust ferroelectric, but not strongly magnetic. Lutetium iron oxide consists of alternating monolayers of lutetium oxide and iron oxide. They then used a technique called molecular-beam epitaxy to add one extra monolayer of iron oxide to every 10 atomic repeats of the single-single monolayer pattern.
"We were essentially spray painting individual atoms of iron, lutetium and oxygen to achieve a new atomic structure that exhibits stronger magnetic properties," said Darrell Schlom, a materials science and engineering professor at Cornell and senior author of a study on the work recently published in Nature.
The result was a new material that combines a phenomenon in lutetium oxide called "planar rumpling" with the magnetic properties of iron oxide to achieve multiferroic properties at room temperature.
Heron explains that the lutetium exhibits atomic-level displacements called rumples. Visible under an electron microscope, the rumples enhance the magnetism in the material, allowing it to persist at room temperature. The rumples can be moved by applying an electric field, and are enough to nudge the magnetic field in the neighboring layer of iron oxide from positive to negative or vice versa, creating a material whose magnetic properties can be controlled with electricity--a "magnetoelectric multiferroic."
While Heron believes a viable multiferroic device is likely several years off, the work puts the field closer to its goal of devices that continue the computing industry's speed improvements while consuming less power. This is essential if the electronics industry is to continue to advance according to Moore's law, which predicts that the number of transistors on integrated circuits will double roughly every two years. This has held true since the 1960s, but experts predict that current silicon-based technology may be approaching its limits.

Glow-in-the-dark Dye Could Fuel Liquid-based Batteries

From Controlled Environments


Could a glow-in-the-dark dye be the next advancement in energy storage technology?
Scientists at the University at Buffalo think so.
They have identified a fluorescent dye called BODIPY as an ideal material for stockpiling energy in rechargeable, liquid-based batteries that could one day power cars and homes.
BODIPY—short for boron-dipyrromethene—shines brightly in the dark under a black light.
But the traits that facilitate energy storage are less visible. According to new research, the dye has unusual chemical properties that enable it to excel at two key tasks: storing electrons and participating in electron transfer. Batteries must perform these functions to save and deliver energy, and BODIPY is very good at them.
In experiments, a BODIPY-based test battery operated efficiently and with longevity, running well after researchers drained and recharged it 100 times.
"As the world becomes more reliant on alternative energy sources, one of the huge questions we have is, 'How do we store energy?' What happens when the sun goes down at night, or when the wind stops?" says lead researcher Timothy Cook, PhD, an assistant professor of chemistry in the University at Buffalo College of Arts and Sciences. "All these energy sources are intermittent, so we need batteries that can store enough energy to power the average house."
The research was published on Nov. 16 in ChemSusChem, an academic journal devoted to topics at the intersection of chemistry and sustainability.
BODIPY is a promising material for a liquid-based battery called a "redox flow battery."
These fluid-filled power cells present several advantages over those made from conventional materials.
Lithium-ion batteries, for example, are risky in that they can catch fire if they break open, Cook says. The dye-based batteries would not have this problem; if they ruptured, they would simply leak, he says.
Redox flow batteries can also be easily enlarged to store more energy—enough to allow a homeowner to power a solar house overnight, for instance, or to enable a utility company to stockpile wind energy for peak usage times. This matters because scaling up has been a challenge for many other proposed battery technologies.
Redox flow batteries consist of two tanks of fluids separated by various barriers.
When the battery is being used, electrons are harvested from one tank and moved to the other, generating an electric current that—in theory—could power devices as small as a flashlight or as big as a house. To recharge the battery, you would use a solar, wind or other energy source to force the electrons back into the original tank, where they would be available to do their job again.
A redox flow battery's effectiveness depends on the chemical properties of the fluids in each tank.
"The library of molecules used in redox flow batteries is currently small but is expected to grow significantly in coming years," Cook says. "Our research identifies BODIPY dye as a promising candidate."
In experiments, Cook's team filled both tanks of a redox flow battery with the same solution: a powdered BODIPY dye called PM 567 dissolved in liquid.
Within this cocktail, the BODIPY compounds displayed a notable quality: They were able to give up and receive an electron without degrading, as many other chemicals do. This trait enabled the dye to store electrons and facilitate their transfer between the battery's two ends over 100 repeated cycles of charging and draining.
Based on the experiments, scientists also predict that BODIPY batteries would be powerful enough to be useful to society, generating an estimated 2.3 volts of electricity.
The study focused on PM 567, but different varieties of BODIPY share chemical properties, so it's likely that other BODIPY dyes would also make good energy storage candidates, Cook says.

New Electrical Energy Storage Material Shows its Power

From Controlled Environments
A powerful new material developed by Northwestern University chemist William Dichtel and his research team could one day speed up the charging process of electric cars and help increase their driving range.
An electric car currently relies on a complex interplay of both batteries and supercapacitors to provide the energy it needs to go places, but that could change.
“Our material combines the best of both worlds -- the ability to store large amounts of electrical energy or charge, like a battery, and the ability to charge and discharge rapidly, like a supercapacitor,” said Dichtel, a pioneer in the young research field of covalent organic frameworks (COFs).
Dichtel and his research team have combined a COF -- a strong, stiff polymer with an abundance of tiny pores suitable for storing energy -- with a very conductive material to create the first modified redox-active COF that closes the gap with other older porous carbon-based electrodes.
“COFs are beautiful structures with a lot of promise, but their conductivity is limited,” Dichtel said. “That’s the problem we are addressing here. By modifying them -- by adding the attribute they lack -- we can start to use COFs in a practical way.”
And modified COFs are commercially attractive: COFs are made of inexpensive, readily available materials, while carbon-based materials are expensive to process and mass-produce.
Dichtel, the Robert L. Letsinger Professor of Chemistry at the Weinberg College of Arts and Sciences, presented his team’s findings August 24 at the American Chemical Society (ACS) National Meeting in Philadelphia. Also, a paper by Dichtel and co-authors from Northwestern and Cornell University was published this week by the journal ACS Central Science.
To demonstrate the new material’s capabilities, the researchers built a coin-cell battery prototype device capable of powering a light-emitting diode for 30 seconds.
The material has outstanding stability, capable of 10,000 charge/discharge cycles, the researchers report. They also performed extensive additional experiments to understand how the COF and the conducting polymer, called poly(3,4-ethylenedioxythiophene) or PEDOT, work together to store electrical energy.
Dichtel and his team made the material on an electrode surface. Two organic molecules self-assembled and condensed into a honeycomb-like grid, one 2-D layer stacked on top of the other. Into the grid’s holes, or pores, the researchers deposited the conducting polymer.
Each pore is only 2.3 nanometers wide, but the COF is full of these useful pores, creating a lot of surface area in a very small space. A small amount of the fluffy COF powder, just enough to fill a shot glass and weighing the same as a dollar bill, has the surface area of an Olympic swimming pool.
The modified COF showed a dramatic improvement in its ability to both store energy and to rapidly charge and discharge the device. The material can store roughly 10 times more electrical energy than the unmodified COF, and it can get the electrical charge in and out of the device 10 to 15 times faster.
“It was pretty amazing to see this performance gain,” Dichtel said. “This research will guide us as we investigate other modified COFs and work to find the best materials for creating new electrical energy storage devices.”
The National Science Foundation (grant DGE-1144153), the Camille and Henry Dreyfus Foundation and the U.S. Army Research Office (Multidisciplinary University Research Initiatives grant W911NF-15-1-0447) supported the research.
The research was conducted at Cornell University, where Dichtel was a faculty member until this summer, when he moved to Northwestern.
The paper is titled “Superior Charge Storage and Power Density of a Conducting Polymer-Modified Covalent Organic Framework.” In addition to Dichtel, other authors are Ryan P. Bisbey, of Northwestern; Catherine R. Mulzer (née DeBlase, first author), currently at Dow Electronic Materials; and Luxi Shen, James R. McKone, Na Zhang and Héctor D. Abruña, of Cornell.

Researchers Uncover Astonishing Behavior of Water

From Controlled Environments
It’s a well-known fact that water, at sea level, starts to boil at a temperature of 212 degrees Fahrenheit, or 100 degrees Celsius. And scientists have long observed that when water is confined in very small spaces, its boiling and freezing points can change a bit, usually dropping by around 10 C or so.
But now, a team at MIT has found a completely unexpected set of changes: Inside the tiniest of spaces — in carbon nanotubes whose inner dimensions are not much bigger than a few water molecules — water can freeze solid even at high temperatures that would normally set it boiling.
The discovery illustrates how even very familiar materials can drastically change their behavior when trapped inside structures measured in nanometers, or billionths of a meter. And the finding might lead to new applications — such as, essentially, ice-filled wires — that take advantage of the unique electrical and thermal properties of ice while remaining stable at room temperature.
The results are being reported in the journal Nature Nanotechnology, in a paper by Michael Strano, the Carbon P. Dubbs Professor in Chemical Engineering at MIT; postdoc Kumar Agrawal; and three others.
“If you confine a fluid to a nanocavity, you can actually distort its phase behavior,” Strano says, referring to how and when the substance changes between solid, liquid, and gas phases. Such effects were expected, but the enormous magnitude of the change, and its direction (raising rather than lowering the freezing point), were a complete surprise: In one of the team’s tests, the water solidified at a temperature of 105 C or more. (The exact temperature is hard to determine, but 105 C was considered the minimum value in this test; the actual temperature could have been as high as 151 C.)
“The effect is much greater than anyone had anticipated,” Strano says.
It turns out that the way water’s behavior changes inside the tiny carbon nanotubes — structures the shape of a soda straw, made entirely of carbon atoms but only a few nanometers in diameter — depends crucially on the exact diameter of the tubes. “These are really the smallest pipes you could think of,” Strano says. In the experiments, the nanotubes were left open at both ends, with reservoirs of water at each opening.
Even the difference between nanotubes 1.05 nanometers and 1.06 nanometers across made a difference of tens of degrees in the apparent freezing point, the researchers found. Such extreme differences were completely unexpected. “All bets are off when you get really small,” Strano says. “It’s really an unexplored space.”
In earlier efforts to understand how water and other fluids would behave when confined to such small spaces, “there were some simulations that showed really contradictory results,” he says. Part of the reason for that is many teams weren’t able to measure the exact sizes of their carbon nanotubes so precisely, not realizing that such small differences could produce such different outcomes.
In fact, it’s surprising that water even enters into these tiny tubes in the first place, Strano says: Carbon nanotubes are thought to be hydrophobic, or water-repelling, so water molecules should have a hard time getting inside. The fact that they do gain entry remains a bit of a mystery, he says.
Strano and his team used highly sensitive imaging systems based on a technique called vibrational spectroscopy, which could track the movement of water inside the nanotubes, making its behavior subject to detailed measurement for the first time.
The team can detect not only the presence of water in the tube, but also its phase, he says: “We can tell if it’s vapor or liquid, and we can tell if it’s in a stiff phase.” While the water definitely goes into a solid phase, the team avoids calling it “ice” because that term implies a certain kind of crystalline structure, which they haven’t yet been able to show conclusively exists in these confined spaces. “It’s not necessarily ice, but it’s an ice-like phase,” Strano says.
Because this solid water doesn’t melt until well above the normal boiling point of water, it should remain perfectly stable indefinitely under room-temperature conditions. That makes it potentially a useful material for a variety of possible applications, he says. For example, it should be possible to make “ice wires” that would be among the best carriers known for protons, because water conducts protons at least 10 times more readily than typical conductive materials. “This gives us very stable water wires, at room temperature,” he says.
The research team also included MIT graduate students Steven Shimizu and Lee Drahushuk, and undergraduate Daniel Kilcoyne. The work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office through the MIT Institute for Soldier Nanotechnologies, and Shell-MIT Energy Initiative Energy Research Fund.
Source: MIT

Nanoshells Explore How Life May Have Started

From Controlled Environments
A way to coax simple, inorganic nanoparticles to spontaneously assemble into shells has been discovered, potentially paving the way for more efficient industrial chemical processing, gene delivery, and clean-up of chemical contaminants in the environment, researchers say.
And it explores how life may have started.
“This work brings up the deeper questions linking self-assembly and the origin of life,” says Nicholas Kotov, the Joseph B. and Florence V. Cejka Professor of Chemical Engineering at the University of Michigan. “Cells, viruses — all life relies on compartmentalization.”
Kotov’s team demonstrated the self-assembly of simple nanoparticles into spherical shells about 20 to 50 nanometers across, or about half the diameter of a virus. The nanoparticles are made of cadmium sulfide, a semiconducting material that can be used to make solar cells. To clarify how the self-assembly occurred, Petr Král, a professor of chemistry at the University of Illinois, Chicago, and colleagues made detailed simulations of the self-assembled nanoparticle shells, down to the level of individual atoms.
This high-fidelity modeling takes months to simulate less than a millionth of a second, so the Chicago team could not show the entire one-second process of the shells assembling. Still, they could show that once a shell had formed, the forces on the particles kept it together.
The individual cadmium sulfide particles, roughly shaped like four-sided pyramids, have a negative charge. This causes them to repel one another. But in close quarters, this repulsion is overcome by an attraction between the surfaces: The electrons on each particle run away from one another, creating positively and negatively charged regions on the atoms that are aligned so that the particles attract one another.
But when many particles come together, the repulsion from the overall negative charge becomes strong enough that despite the close-range attraction, they can’t form a solid sphere — particles on the inside get pushed out. Instead, they form shells. Peijun Zhang, a renowned expert in the shells of viruses and professor of structural biology at the University of Pittsburgh, and her group obtained precise three-dimensional images of the nanoshells with an electron microscope.
To cause the shells to form, the team needed to adjust only the pH, making the water moderately basic, which gives the nanoparticles their negative charge.
“The nanoparticles formed compartments without careful chemical organization. There was no need for peptides, amino acids or any organic molecules,” says Kotov, explaining how the work connects to the origin of life. “If there are particles from rocks, and liquid for mobility, compartments can form.”
Nanoshells could be catalysts, creating shortcuts in industrial chemistry by cutting the energy required to produce useful chemicals or reducing the waste products. The self-assembly mechanism may enable the shells to self-repair if they are damaged by the reactions, a common problem for catalysts. As catalysts, they may also be useful for cleaning up chemical spills.
Kotov is particularly interested in their potential for gene delivery, mimicking the natural viral shells currently used. His lab is currently exploring the viability of the nanoshells as capsules for gene therapy, a treatment for cancers and other disorders. The genes must be protected until they reach their target site in the body.
“It is a beautiful system with a lot of interesting science. It is much simpler than the systems of compartments in living organisms, but we can modify it, learn how to control it, and eventually engineer it for applications,” Král says.
As for the origin of life, the nanoparticles leave holes in the shells large enough for small molecules to pass through. Kotov plans to explore whether the shells can catalyze reactions to build organic molecules.
Kotov is also a professor of chemical engineering, biomedical engineering, materials science and engineering, and macromolecular science and engineering. The study is to be published in Nature Chemistry, titled “Self-Assembly of Nanoparticles into Biomimetic Capsid-Like Nanoshells.”
The work was funded by the National Science Foundation of China (grant No. 21303032 and 21571041), the U.S. National Science Foundation (grant No. 1309765), the American Chemical Society (grant No. 53062-ND6). This research used resources of the National Energy Research Scientific Computing Center (NERSC), supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation (grant No. OCI-1053575), and by the National Institutes of Health (grant No. GM085043).

Innovative Optical Material Offers Unparalleled Control of Light

From Controlled Environments
A team led by Nanfang Yu, assistant professor of applied physics at Columbia Engineering, has discovered a new phase-transition optical material and demonstrated novel devices that dynamically control light over a much broader wavelength range and with larger modulation amplitude than what has currently been possible.
The team, including researchers from Purdue, Harvard, Drexel, and Brookhaven National Laboratory, found that samarium nickelate (SmNiO3) can be electrically tuned continuously between a transparent and an opaque state over an unprecedentedly broad range of spectrum, from the blue in the visible (wavelength of 400 nm) to the thermal radiation spectrum in the mid-infrared (wavelengths of a few tens of micrometers).
The study, which is the first investigation of the optical properties of SmNiO3 and the first demonstration of the material in photonic device applications, is published online in Advanced Materials.
“The performance of SmNiO3 is record-breaking in terms of the magnitude and wavelength range of optical tuning,” Yu says. “There is hardly any other material that offers such a combination of properties that are highly desirable for optoelectronic devices. The reversible tuning between the transparent and opaque states is based on electron doping at room temperature, and potentially very fast, which opens up a wide range of exciting applications, such as ‘smart windows’ for dynamic and complete control of sunlight, variable thermal emissivity coatings for infrared camouflage and radiative temperature control, optical modulators, and optical memory devices.”
Some of the potential new functions include using SmNiO3‘s capability in controlling thermal radiation to build “intelligent” coatings for infrared camouflage and thermoregulation. These coatings could make people and vehicles, for example, appear much colder than they actually are and thus indiscernible under a thermal camera at night. The coating could help reduce the large temperature gradients on a satellite by adjusting the relative thermal radiation from its bright and dark side with respect to the sun and thereby prolong the lifetime of the satellite. Because this phase-transition material can potentially switch between the transparent and opaque states with high speed, it may be used in modulators for free-space optical communication and optical radar and in optical memory devices.
Researchers have long been trying to build active optical devices that can dynamically control light. These include Boeing 787 Dreamliner’s “smart windows,” which control (but not completely) the transmission of sunlight, rewritable DVD discs on which we can use a laser beam to write and erase data, and high-data-rate, long-distance fiber optic communications systems where information is “written” into light beams by optical modulators. Active optical devices are not more common in everyday life, however, because it has been so difficult to find advanced actively tunable optical materials, and to design proper device architectures that amplify the effects of such tunable materials.
When Shriram Ramanathan, associate professor of materials science at Harvard, discovered SmNiO3‘s giant tunable electric resistivity at room temperature, Yu took note. The two met at the IEEE Photonics Conference in 2013 and decided to collaborate. Yu and his students, working with Ramanathan, who is a co-author of this paper, conducted initial optical studies of the phase-transition material, integrated the material into nanostructured designer optical interfaces — “metasurfaces” — and created prototype active optoelectronic devices, including optical modulators that control a beam of light, and variable emissivity coatings that control the efficiency of thermal radiation.
“SmNiO3 is really an unusual material,” says Zhaoyi Li, the paper’s lead author and Yu’s PhD student, “because it becomes electrically more insulating and optically more transparent as it is doped with more electrons — this is just the opposite of common materials such as semiconductors.”
It turns out that doped electrons “lock” into pairs with the electrons initially in the material, a quantum mechanical phenomenon called “strong electron correlation,” and this effect makes these electrons unavailable for conducting electric current or absorbing light. So, after electron doping, SmNiO3 thin films that were originally opaque suddenly allow more than 70 percent of visible light and infrared radiation to transmit through.
“One of our biggest challenges,” Zhaoyi adds, “was to integrate SmNiO3 into optical devices. To address this challenge, we developed special nanofabrication techniques to pattern metasurface structures on SmNiO3 thin films. In addition, we carefully chose the device architecture and materials to ensure that the devices can sustain high temperature and pressure that are required in the fabrication process to activate SmNiO3.”
Yu and his collaborators plan next to run a systematic study to understand the basic science of the phase transition of SmNiO3 and to explore its technological applications. The team will investigate the intrinsic speed of phase transition and the number of phase-transition cycles the material can endure before it breaks down. They will also work on addressing technological problems, including synthesizing ultra-thin and smooth films of the material and developing nanofabrication techniques to integrate the material into novel flat optical devices.
“This work is one crucial step towards realizing the major goal of my research lab, which is to make an optical interface a functional optical device,” Yu notes. “We envision replacing bulky optical devices and components with ‘flat optics’ by utilizing strong interactions between light and two-dimensional structured materials to control light at will. The discovery of this phase-transition material and the successful integration of it into a flat device architecture are a major leap forward to realizing active flat optical devices not only with enhanced performance from the devices we are using today, but with completely new functionalities.”
Yu’s team included Ramanathan, his Harvard PhD student You Zhou, and his Purdue postdoctoral fellow Zhen Zhang, who synthesized the phase-transition material and did some of the phase transition experiments (this work began at Harvard and continued when Ramanathan moved to Purdue); Drexel University Materials Science Professor Christopher Li, PhD student Hao Qi, and research scientist Qiwei Pan, who helped make solid-state devices by integrating SmNiO3 with novel solid polymer electrolytes; and Brookhaven National Laboratory staff scientists Ming Lu and Aaron Stein, who helped with device nanofabrication. Yuan Yang, Assistant Professor of Materials Science and Engineering in the Department of Applied Physics and Applied Mathematics at Columbia Engineering, was consulted during the progress of this research.
The study was funded by DARPA YFA (Defense Advanced Research Projects Agency Young Faculty Award), ONR YIP (Office of Naval Research Young Investigator Program), AFOSR MURI (Air Force Office of Scientific Research Multidisciplinary University Research Initiative) on metasurfaces, Army Research Office, and NSF EPMD (Electronics, Photonics, and Magnetic Devices) program.

Wednesday, September 28, 2016

The Fourier Transform

Excerpt from Science & Mathematics nautil.us

From an article by Aatish Bhatia, a physics Ph.D. working at Princeton University to bring science and engineering to a wider audience. He writes the award-winning science blog Empirical Zeal and is on Twitter as @aatishb.



 What was Fourier’s discovery, and why is it useful? Imagine playing a note on a piano. When you press the piano key, a hammer strikes a string that vibrates to and fro at a certain fixed rate (440 times a second for the A note). As the string vibrates, the air molecules around it bounce to and fro, creating a wave of jiggling air molecules that we call sound. If you could watch the air carry out this periodic dance, you’d discover a smooth, undulating, endlessly repeating curve that’s called a sinusoid, or a sine wave. (Clarification: In the example of the piano key, there will really be more than one sine wave produced. The richness of a real piano note comes from the many softer overtones that are produced in addition to the primary sine wave. A piano note can be approximated as a sine wave, but a tuning fork is a more apt example of a sound that is well-approximated by a single sinusoid.)
Now, instead of a single key, say you play three keys together to make a chord. The resulting sound wave isn’t as pretty—it looks like a complicated mess. But hidden in that messy sound wave is a simple pattern. After all, the chord was just three keys struck together, and so the messy sound wave that results is really just the sum of three notes (or sine waves).
Fourier’s insight was that this isn’t just a special property of musical chords, but applies more generally to any kind of repeating wave, be it square, round, squiggly, triangular, whatever. The Fourier transform is like a mathematical prism—you feed in a wave and it spits out the ingredients of that wave—the notes (or sine waves) that when added together will reconstruct the wave.
If this sounds a little abstract, here are a few different ways of visualizing Fourier’s trick. The first one comes to us from Lucas V. Barbosa, a Brazilian physics student who volunteers for Wikipedia, where he goes by “LucasVB.” However you picture it, the Fourier transform is a recipe—it tells you exactly how much of each note you need to mix together to reconstruct the original wave.
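To make the “prism” concrete, here is a small sketch of the idea (mine, not from the article), using NumPy: build a chord out of three sine waves, then let the FFT report which frequencies went into it. The sampling rate and note frequencies are illustrative choices.

```python
import numpy as np

rate = 4096                              # samples per second, chosen so FFT bins are 1 Hz
t = np.arange(rate) / rate               # one second of audio
# an A-major chord: A4 (440 Hz), C#5 (~554 Hz), E5 (~659 Hz)
chord = (np.sin(2*np.pi*440*t) +
         np.sin(2*np.pi*554*t) +
         np.sin(2*np.pi*659*t))

spectrum = np.abs(np.fft.rfft(chord))    # strength of each ingredient frequency
freqs = np.fft.rfftfreq(len(chord), d=1/rate)
print(freqs[spectrum > 100])             # -> [440. 554. 659.]
```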
And this isn’t just some obscure mathematical trick. The Fourier transform shows up nearly everywhere that waves do. The ubiquitous MP3 format uses a variant of Fourier’s trick to achieve its tremendous compression over the WAV (pronounced “wave”) files that preceded it. An MP3 splits a song into short segments. For each audio segment, Fourier’s trick reduces the audio wave down to its ingredient notes, which are then stored in place of the original wave. The Fourier transform also tells you how much of each note contributes to the song, so you know which ones are essential. The really high notes aren’t so important (our ears can barely hear them), so MP3s throw them out, resulting in added data compression. Audiophiles don’t like MP3s for this reason—it’s not a lossless audio format, and they claim they can hear the difference.
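Here is a toy version of that compression idea (my sketch, not the actual MP3 codec, which uses a modified discrete cosine transform and psychoacoustic rules): transform a segment, discard the quietest ingredient notes, and invert. The signal and threshold are made up for illustration.

```python
import numpy as np

rate = 4096
t = np.arange(rate) / rate
# one loud note plus one faint high one (amplitudes invented)
segment = np.sin(2*np.pi*440*t) + 0.05*np.sin(2*np.pi*3000*t)

coeffs = np.fft.rfft(segment)
coeffs[np.abs(coeffs) < 0.1*np.abs(coeffs).max()] = 0   # drop the quiet notes
compressed = np.fft.irfft(coeffs, n=len(segment))

print(np.max(np.abs(segment - compressed)))   # ~0.05: only the faint note is lost
```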
Music-identification apps use the same trick to recognize a song. Such an app splits the music into chunks, then uses Fourier’s trick to figure out the ingredient notes that make up each chunk. It then searches a database to see if this “fingerprint” of notes matches that of a song it has on file. Speech recognition uses the same Fourier-fingerprinting idea to compare the notes in your speech to those of a known list of words.
You can even use Fourier’s trick for images. Here’s a great video that shows how you can use circles to draw Homer Simpson’s face. The computational knowledge engine Wolfram Alpha uses a similar idea to draw famous people’s faces. This might seem like a trick you’d reserve for a very nerdy cocktail party, but it’s also used to compress images into JPEG files. In the old days of Microsoft Paint, images were saved as bitmap (BMP) files, which were a long list of numbers encoding the color of every single pixel. JPEG is the MP3 of images. To build a JPEG, you first chunk your image into tiny squares of 8 by 8 pixels. For each chunk, you use the same circle idea that reconstructs Homer Simpson’s face to reconstruct this portion of the image. Just as MP3s throw out the really high notes, JPEGs throw out the really tiny circles. The result is a huge reduction in file size with only a small reduction in quality, an insight that led to the visual online world that we all love (and that eventually gave us cat GIFs).
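A rough sketch of that JPEG step (mine, assuming SciPy’s DCT routines; the block contents and threshold are invented): take the 2-D discrete cosine transform of an 8-by-8 block, zero out the tiny coefficients, and rebuild the block.

```python
import numpy as np
from scipy.fft import dctn, idctn

# a smooth 8x8 "image" block (a simple gradient), invented for illustration
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16.0

coeffs = dctn(block, norm='ortho')                       # 2-D DCT, as in JPEG
coeffs[np.abs(coeffs) < 0.05*np.abs(coeffs).max()] = 0   # throw out tiny "circles"
approx = idctn(coeffs, norm='ortho')                     # rebuild the block

print(np.count_nonzero(coeffs), "of 64 coefficients kept")
print(float(np.max(np.abs(block - approx))), "max pixel error")
```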
How is Fourier’s trick used in science? I put out a call on Twitter for scientists to describe how they used Fourier’s idea in their work. The response astounded me. The scientists who responded were using the Fourier transform to study the vibrations of submersible structures interacting with fluids, to try to predict upcoming earthquakes, to identify the ingredients of very distant galaxies, to search for new physics in the heat remnants of the Big Bang, to uncover the structure of proteins from X-ray diffraction patterns, to analyze digital signals for NASA, to study the acoustics of musical instruments, to refine models of the water cycle, to search for pulsars (spinning neutron stars), and to understand the structure of molecules using nuclear magnetic resonance. The Fourier transform has even been used to identify a counterfeit Jackson Pollock painting by deciphering the chemicals in the paint.
Whew! That’s quite the legacy for one little math trick.

Wednesday, September 21, 2016

Introduction to Control Systems

For a simple introduction to Control Systems refer to the page below

https://www.facstaff.bucknell.edu/mastascu/eControlHTML/Intro/Intro1.html


Evaluation of Control Systems

Analysis of control systems provides crucial insights to control practitioners on why and how feedback control works. Although the use of PID precedes the birth of classical control theory in the 1950s by at least two decades, it is the latter that established the control engineering discipline. The core of classical control theory is the set of frequency-response-based analysis techniques, namely, Bode and Nyquist plots, stability margins, and so forth.
In particular, by examining the loop gain frequency response of the system in Fig. 19.1.9, that is, L(jω) = Gc(jω)Gp(jω), and the sensitivity function 1/[1 + L(jω)], one can determine the following:
  1. How fast the control system responds to the command or disturbance input (i.e., the bandwidth).
  2. Whether the closed-loop system is stable (Nyquist stability theorem) and, if it is stable, how much dynamic variation it takes to make the system unstable (in terms of the gain and phase change in the plant). This leads to the definition of gain and phase margins and, more broadly, defines how robust the control system is (see the numerical sketch after this list).
  3. How sensitive the performance (or closed-loop transfer function) is to the changes in the parameters of the plant transfer function (described by the sensitivity function).
  4. The frequency range and the amount of attenuation for the input and output disturbances shown in Fig. 19.1.10 (again described by the sensitivity function).
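As a concrete illustration of items 1 and 2, here is a minimal numerical sketch that reads the gain and phase margins off the loop gain; the plant Gp(s) = 1/[s(s+1)(s+2)] and the pure-gain controller Gc = 2 are assumed examples, not from the handbook.

```python
import numpy as np
from scipy import signal

# assumed example: Gp(s) = 1/(s(s+1)(s+2)), Gc(s) = 2
Gp = signal.TransferFunction([1.0], [1.0, 3.0, 2.0, 0.0])
w = np.logspace(-2, 2, 100000)
_, mag_db, phase_deg = signal.bode(Gp, w)
mag_db = mag_db + 20*np.log10(2.0)       # include Gc: a gain of 2 adds ~6 dB

# gain crossover: |L(jw)| = 0 dB; phase margin = 180 deg + phase there
i_gc = np.argmin(np.abs(mag_db))
print("phase margin:", round(180.0 + phase_deg[i_gc], 1), "deg")

# phase crossover: angle(L) = -180 deg; gain margin = -|L| in dB there
i_pc = np.argmin(np.abs(phase_deg + 180.0))
print("gain margin:", round(-mag_db[i_pc], 1), "dB")
```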



    Digital Implementation
    Once the controller is designed and simulated successfully, the next step is to digitize it so that it can be programmed into the processor in the digital control hardware. To do this:
    1. Determine the sampling period Ts and the number of bits used in analog-to-digital converter (ADC) and digital-to-analog converter (DAC).
    2. Convert the continuous time transfer function Gc(s) to its corresponding discrete time transfer function Gcd(z) using, for example, Tustin’s method, s = (2/Ts)(z − 1)/(z + 1).
    3. From Gcd(z), derive the difference equation, u(k) = g(u(k − 1), u(k − 2), . . . , y(k), y(k − 1), . . .), where g is a linear algebraic function.
      After the conversion, the sampled data system, with the plant running in continuous time and the controller in discrete time, should be verified in simulation first before the actual implementation. The quantization error and sensor noise should also be included to make it realistic.
    The minimum sampling frequency required for a given control system design has not been established analytically. The rule of thumb given in control textbooks is that fs = 1/Ts should be chosen approximately 30 to 60 times the bandwidth of the closed-loop system. A lower sampling frequency is possible after careful tuning, but aliasing, or signal distortion, will occur when the data to be sampled have significant energy above the Nyquist frequency. For this reason, an antialiasing filter is often placed in front of the ADC to filter out the high-frequency content in the signal.
    Typical ADC and DAC chips have 8, 12, and 16 bits of resolution; this is the length of the binary number used to approximate an analog one. The selection of the resolution depends on the noise level in the sensor signal and the accuracy specification. For example, the sensor noise level, say 0.1 percent, must be below the accuracy specification, say 0.5 percent. Allowing one bit for the sign, an 8-bit ADC with a resolution of 1/2^7, or 0.8 percent, is not good enough; similarly, a 16-bit ADC with a resolution of 0.003 percent is unnecessary because several bits are “lost” in the sensor noise. Therefore, a 12-bit ADC, which has a resolution of 0.04 percent, is appropriate for this case. This is an example of an “error budget,” as it is known among designers, where components are selected economically so that the sources of inaccuracy are distributed evenly.
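A quick numeric check of the resolution arithmetic above (one bit reserved for the sign), as a sketch:

```python
for bits in (8, 12, 16):
    print(bits, "bits ->", round(100 / 2**(bits - 1), 4), "% resolution")
# 8 bits -> 0.7812 %, 12 bits -> 0.0488 %, 16 bits -> 0.0031 %
```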
    Converting Gc(s) to Gcd(z) is a matter of numerical integration. Many methods have been suggested; some are too simple and inaccurate (such as Euler’s forward and backward methods), others are too complex. Tustin’s method suggested above, also known as the trapezoidal method or bilinear transformation, is a good compromise. Once the discrete transfer function Gcd(z) is obtained, finding the corresponding difference equation that can be easily programmed in C is straightforward.
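The three steps above can be sketched in a few lines; this is a minimal illustration, not the handbook’s code, and the lead compensator, gains, and sampling period are assumed. SciPy’s cont2discrete with method='bilinear' performs the Tustin conversion, and the resulting coefficients give the difference equation directly:

```python
import numpy as np
from scipy.signal import cont2discrete

Ts = 0.01                                  # step 1: sampling period (assumed 100 Hz)
num, den = [10.0, 20.0], [1.0, 20.0]       # Gc(s) = 10(s + 2)/(s + 20), assumed

# step 2: Tustin (bilinear) transform, s = (2/Ts)(z - 1)/(z + 1)
numd, dend, _ = cont2discrete((num, den), Ts, method='bilinear')
b, a = numd.flatten(), dend                # Gcd(z) coefficients, a[0] normalized to 1

# step 3: difference equation u(k) = -a[1]*u(k-1) + b[0]*y(k) + b[1]*y(k-1)
u1 = y1 = 0.0
for y_k in [1.0, 0.8, 0.5, 0.2]:           # made-up sampled error signal
    u_k = -a[1]*u1 + b[0]*y_k + b[1]*y1
    print(round(u_k, 4))
    u1, y1 = u_k, y_k
```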


    Finally, the presence of sensor noise usually requires that an antialiasing filter be used in front of the ADC to avoid distortion of the signal in the ADC. The phase lag from such a filter must not be significant at the crossover frequency (bandwidth), or it will reduce the stability margin or even destabilize the system. This puts yet another constraint on the controller design.


    ALTERNATIVE DESIGN METHODS 


    Nonlinear PID
    Using nonlinear PID (NPID) is an alternative to PID for better performance. It maintains the simplicity and intuition of PID, but empowers it with nonlinear gains. The need for integral control is reduced by making the proportional gain larger when the error is small, as illustrated in the sketch below.
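One common way to realize such a nonlinear gain is a power law on the error; this is a minimal sketch of the idea with made-up constants, not the handbook’s exact formulation. For 0 < alpha < 1, the effective proportional gain grows as the error shrinks, and a linear region near zero keeps the gain finite:

```python
import numpy as np

def nonlinear_p(error, kp=1.0, alpha=0.5, delta=0.01):
    # kp*sign(e)*|e|^alpha gives a larger effective gain for small errors;
    # below delta the law is linear so the gain stays finite at e = 0.
    e = np.asarray(error, dtype=float)
    return np.where(np.abs(e) > delta,
                    kp * np.sign(e) * np.abs(e)**alpha,
                    kp * e / delta**(1.0 - alpha))

for e in (1.0, 0.1, 0.001):
    u = float(nonlinear_p(e))
    print(f"error {e}: output {u:.4f}, effective gain {u/e:.1f}")
```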


    Controllability and Observability. Controllability and observability are useful system properties and are defined as follows. Consider an nth-order system described by
    ẋ = Ax + Bu, z = Mx
    where A is an n × n matrix. The system is controllable if it is possible to transfer the state to any other state in finite time. This property is important as it measures, for example, the ability of a satellite system to reorient itself to face another part of the earth’s surface using the available thrusters; or to shift the temperature in an industrial oven to a specified temperature. Two equivalent tests for controllability are:
    The system (or the pair (A, B)) is controllable if and only if the controllability matrix C = [B, AB, . . . , A^(n−1)B] has full (row) rank n; equivalently, if and only if [s_iI − A, B] has full (row) rank n for all eigenvalues s_i of A.
    The system is observable if by observing the output and the input over a finite period of time it is possible to deduce the value of the state vector of the system. If, for example, a circuit is observable it may be possible to determine all the voltages across the capacitors and all currents through the inductances by observing the input and output voltages.
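A minimal sketch of the rank test above on an assumed two-state example: build C = [B, AB, . . . , A^(n−1)B] with NumPy and check that it has full rank n.

```python
import numpy as np

# assumed example system (n = 2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# controllability matrix C = [B, AB, ..., A^(n-1)B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print("controllable:", np.linalg.matrix_rank(ctrb) == n)   # -> True
```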


    Eigenvalue Assignment Design. Consider the equations ẋ = Ax + Bu, y = Cx + Du, and u = p + Kx. When the system is controllable, K can be selected to assign the closed-loop eigenvalues to any desired locations (real or complex conjugate) and thus significantly modify the behavior of the open-loop system. Many algorithms exist to determine such a K. In the case of a single input, there is a convenient formula called Ackermann’s formula:
    K = −[0, . . . , 0, 1] C^(−1) α_d(A)
    where C = [B, AB, . . . , A^(n−1)B] is the n × n controllability matrix and the roots of α_d(s) are the desired closed-loop eigenvalues.
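A short sketch of Ackermann’s formula applied to the same assumed example system, following the sign convention above (u = p + Kx, so the closed-loop matrix is A + BK); the desired eigenvalues −2 ± 2j are an arbitrary choice:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

# desired closed-loop eigenvalues -2 +/- 2j give a_d(s) = s^2 + 4s + 8
ad = [1.0, 4.0, 8.0]
ad_A = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(ad))

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
e_last = np.zeros((1, n)); e_last[0, -1] = 1.0

K = -e_last @ np.linalg.inv(ctrb) @ ad_A      # K = -[0, ..., 0, 1] C^(-1) a_d(A)
print(np.linalg.eigvals(A + B @ K))           # -> [-2.+2.j -2.-2.j]
```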

    Refer to the link below:
    https://www3.nd.edu/~pantsakl/Publications/348A-EEHandbook05.pdf