Achieving What We Want


In case you haven't seen it, I want to point out this interesting and challenging article from Dale Carrico's Amor Mundi blog. He's talking about the fallacy of expecting molecular manufacturing (along with other potentially transformative technologies) to automatically overcome and leap beyond social, governmental, and economic hurdles and achieve a desired beneficial -- or even Utopian -- outcome:

One cannot point out too many times, for example, that neither "nanotechnology" nor "automation" will one day magically cut or circumvent the basic impasse that inaugurates politics: namely, that we share a finite world with an ineradicable diversity of peers with different stakes, different aspirations, different capacities on whom we depend for our flourishing, from whom we can count on betrayal, misunderstanding, and endless frustration, and with whom we want to be reconciled.

The simple truth is that abundance is already here, already within our grasp (just like war is over... if you want it), and so, it is the defense of injustice in the name of championing parochial prosperity that is the threat to the arrival of the available abundance worth having.

If new cheap robust sustainable materials modified at the nanoscale or new cheap robust sustainable products manufactured via nanoscale replication in whatever construal actually were to arrive on the scene, these would contribute to general welfare and prosperity only if that is the value that defines the societies in which these technodevelopmental outcomes made their appearance. Otherwise, they absolutely would not.

The crux of Dale's argument, I think, is that such simplistic approaches, if applied as policy, could open us up to significant risks. Namely, that an emphasis on technological development as a solution, rather than as a tool to be implemented in the context of broad societal goals, could backfire, not only missing the intended positive outcome but in fact enabling unwanted negative results.

If one wants to arrive at something like the Superlative outcome of "Nanosantalogical" superabundance, what one should be fighting for is to protect and extend democracy, to implement steeply progressive taxation, to broaden welfare as widely as possible, and to make software instructions available for free (else they certainly won't be and then Nanosanta will be sure to open his bag only for the rich). If one wants to arrive at something like the Old School Superlative outcome of universal automation and Robo-Abundance, what one should be fighting for is to implement a basic income guarantee, otherwise automation (including much that gets called "outsourcing" and "crowdsourcing" in contemporary parlance) will simply function as further wealth concentration for incumbents.

Needless to say, I worry that no small amount of the post-political handwaving of the Nanosantalogical mode of Superlativity derives from a prior commitment to neoliberal assumptions, and functions as a proxy (to return to this post's initial topic) precisely for a worldview that would not in fact be displeased at all with the prospect of such wealth concentration for incumbents or with a stingy Nanosanta with a bag full of toys only for already rich girls and boys. These are the discursive derangements that attract my primary interest when talk of MNT [molecular nanotechnology] goes Superlative.

We would generally agree with this perspective. It is why CRN keeps insisting that a parallel track of investment and effort toward understanding the ethical, legal, and social implications (ELSI) of molecular manufacturing and exploring plans for responsible governance must go alongside and keep pace with technical research and development.

Mike Treder


Productive Nanosystems Panel: Applications

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Panel abstract/topic:

Work toward productive nanosystems results in new commercial applications at virtually every step. The increasing ability to control matter to atomic precision enables major leaps in power generation and storage, computation density and efficiency, high performance sensors, and materials for aerospace that outperform past achievements by surprising factors. This panel will explore the possibilities from near-term and practical to longer-term and visionary.

Panelists:

Malcolm R. O'Neill, former CTO, Lockheed Martin; and Chairman, Board on Army S&T, The National Academies
J. Storrs (Josh) Hall, Research Fellow, Institute for Molecular Manufacturing
Papu Maniar, Advanced Materials and Nanotechnology Manager, Motorola
Thomas Theis, Director, Physical Sciences, IBM Research
Moderator: Pearl Chin, President, Foresight Nanotech Institute

[This is a near-transcript -- yes, I do type that fast.]

We're starting with presentations. First, from Josh:

We'll remember the 19th century for the Industrial Revolution. Newcomen steam engine, built just about 300 years ago. The size of a 3-story house, consumed massive amounts of coal. Eventually, with the contributions of James Watt, they began to take over from water wheels. It took almost 100 years to break into a significant paradigm shift position in technology (from vacuum-driven to high-pressure-steam driven) which enabled locomotives and required machine tools. There was a synergistic effect from more than one segment of the economy which created a never-before-seen economic mode.

The racing chain saw (used in lumberjack competitions) has the same horsepower as the Newcomen steam engine, but it's handheld. Horsepower per pound forms more or less an exponential trajectory for 300 years, through steam engines, gas piston, gas turbine... the curve projects molecular power mills at 10^15 watts per cubic meter in 2050. [There's a missing engine technology in the curve, starting about now; if I get a chance I'll ask Josh what it might be.]

Moore's Law. Cray-1: $7M for 133 Mflop. We got 52,600 more op/$ in 30 years; a dual quad Xeon costs $5k for 5 Gflop... and we use it to play Solitaire.
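
[A quick back-of-the-envelope check on those numbers, taking the figures exactly as quoted in the talk:]

```python
# Flops-per-dollar improvement implied by the figures quoted above
cray_flops, cray_cost = 133e6, 7e6     # Cray-1: 133 Mflop/s for about $7M
xeon_flops, xeon_cost = 5e9, 5e3       # dual quad Xeon: 5 Gflop/s for about $5k, as quoted
improvement = (xeon_flops / xeon_cost) / (cray_flops / cray_cost)
print(f"ops per dollar improved ~{improvement:,.0f}x in about 30 years")  # ~52,600x
```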

In 2030, will I be able to afford things that cost $7M today? Airliners, factories, hospitals. Can I carry in my pocket things that weigh ten tons today? Houses, trucks, construction equipment. Josh pulls a memory stick out of his pocket--ten times as much memory as a ten-ton memory bank that served a college in 1976.

Nanofactories may enable this. (Don't forget the 2,500 ton aircraft carrier from David Forrest's talk.)

Malcolm:

Aerospace is interested in atomically precise manufacturing: Lockheed is spending $20-25(?) M per year on nano. APM promises a fundamental change in how we think about making things:
Smaller volume, lower weight, potentially lower cost, stronger and lighter materials, higher-energy propellants, and higher performance, reliability, capability, and quality.

Define APM: Lots of different definitions. Lots of implications. We can make anything we want to make with any properties we can get out of the best materials. When going to Mars, a factor of 50 in weight is the difference between success and failure.

Payoffs in current products, and also yet-to-be-invented products. Lighter weight. Molecular sensors. Smart clothing. Beating Moore's law. (And more...)

In the DoD environment, it's hard to think outside the box. "To some of my friends, graphite epoxy is just black aluminum." Then you need lab tests and prototypes. Then you need demonstrators. 'Show me.' Then upgrade parts of existing systems. Finally, new baseline designs. But this is slow, because a bad piece of equipment can cost lives and national security. So APM technologies will come through commercial applications to the military. In national security, our computers are typically a generation behind.

Long term payoffs means partnerships are needed: industry in close alliances with universities and government labs, primary developers with small startups, to make sure manufacturing gets proper attention and reliability, environmental requirements, etc., are being met.

Tom:

Six years ago, he was asked to tell design automation people what would happen in 50 years in nanotech. So: first 10 years, business as usual, dealing with ever-increasing complexity. Increasing use of synthesis and self-assembly. Organic electronics in niches. Increasing integration of heterogeneous functions (sensors).

10+ years (from six years ago--prediction): Chemically synthesized nano-building blocks replace semiconductor logic and memory devices. Result: Increasing emphasis on redundancy, test-and-repair, and self-repair. We may see little bits of this.

20-50 years: Increasing use of hierarchical self-organization (whatever that meant). IT systems approach biological levels of complexity. (A requirement for APM systems [I disagree--CJP].) We have no clue how to design and verify such systems. Even back then, we were designing self-repairing memories.

Chip-building is the "new industry" and will absorb whatever advances come along.

How big can information technology get? It's 10-12% of the economy at this point. Back in history, people wondered how we could become a manufacturing economy--what would people eat? Today, 3% of economy is agriculture. In future, 98% of economy may be information technology, based on nanotech, probably APM.

Papu:

Nano in mobile devices: electronics, storage, antennas, power, biometrics, camera...

Latest technologies have to be low cost, high volume, quick to commodity. You don't know which phone will take off, at which point you have to make 10,000 to 15,000 phones per day. In 2006, 2.2-2.5 million cell phones were sold per day. By 2010-2011, we'll be selling 5 million phones per day.

How we do R&D has changed: Ideation to acceptance to commercialization. Ideation takes proof of science to proof of technology. One of a kind isn't enough any more; proof of concept has to include repeatable, scalable, six-sigma. Finally, R&D equipment has to be matched to older OEM equipment (transfer). Tech prototype has to generate a product prototype; this is proof of value. Product prototype = 300-3000 units. It has to be not only manufacturing qualified, but suited for high volume manufacturing.

You used to have concept IP. Now you have technology IP, which takes it from proof of concept. And manufacturing IP, which takes it from proof of value.

Nano-specific challenges: Ideation: Value is application-specific. Proof of science is necessary but not sufficient.
Proof of concept and transfer: Very long cycle times to make repeatable setups, because you're forcing nano processes on established equipment.
Proof of value: Value is diluted because product isn't optimized; supply chain isn't ready; risk isn't worth it. Maybe 1 out of 10 nanomaterials makes it into a product.

Nano introduction timelines: 1-3 years: Housing and displays; 3-5 years: energy, storage, RF; 5-10 years: energy, RF, wearables; 10+ years: nano circuits, flexibles.


Pearl: What do the panelists mean by near-term, long-term, visionary?

Tom: Near - evolutionary enhancements on existing tech; visionary: what would be really revolutionary is mfg technologies and machines operating at thermodynamic efficiencies. Precision is a done deal, scaleup has to be worked on, efficiencies depend on ...

Papu: Short term 1-3 years. Long-term 3-7, anything beyond 7 years we have no clue. Cell phones won't look like cell phones in five years.

Malcolm: The chart that Papu showed is exactly the way we see it in aerospace; except we buy rather than make. Investors want to see 3-5 years out. So that's where we use internal funding. Up to 10 years, government funding.

Josh: I and several other nanotech people were invited to the Foundation for the Future in Seattle in 2000. They asked me to talk about what the future was going to be like. They said, "Tell me what the year 3000 is going to be like." I was floored. A thought about design automation: When I was a postdoc in the '90s my group wrote an AI program that could design a complete pipelined microprocessor given a description of the instruction set. That kind of thing is getting better as time goes by. Design and other parts of the information economy are moving at Moore's Law growth rate. Right now, APM has one foot in the digital world of atomic precision, one foot back in the old analog world of the industrial revolution growth rate. Key challenge is to get the synergies right to move the whole thing into the digital growth rate. I discovered that doing things in the real world is a lot harder. When I wire up microprocessors, they work right the first time. When I wire up motor controllers, which are much simpler but carry 500 amps, they blow up. As we go digital, the opposite will happen, which is why digital has the accelerated growth rate that it does.

Malcolm: One of the interesting technologies I don't think you talked about was meso-atmosphere. From 50,000 feet to 100 km, where a satellite can stay in orbit. That's a tremendous range of altitude that no machine occupies. We're exploring that through nanomaterials: lightweight fibers from Akron; power generation and storage; conformal antennae; lightweight materials. That's a system, mission, capability that would be disruptive, revolutionary, all of the above. You wouldn't have to put things in orbit; you'd be above the jet stream; you could stay above one point on the ground virtually forever.

Papu: Motorola worries about how cell phones and mobile comm will change in the next 3 years. Trends we worry about: wearables in general, and whether the current architecture will become distributed or stay unified.

Tom: My focus is on devices and IT: storing, processing, communicating information. Something that's about to happen in a big way, and most of you aren't aware: there's been a precipitous change, a tremendous increase in the rate of decrease of per-bit cost of solid-state memory. Later this year, Samsung will introduce the world's first large-capacity phase change memory. These can be scaled to the few cubic nanometer range without running into fundamental problems. What's already in the labs tells me this trend of decreasing cost of memory will accelerate or at least continue. We'll have attributes of products that we don't have today. Hard drive companies are all panicking and looking for other business. What needs to happen is something has to replace the silicon transistor. That's nano. That's simply not there; there's lots of handwaving, e.g. non-charge-based devices, spintronic, plasmonic; but everything that's in the lab doesn't have the capability of doing better than the transistor in terms of performance/power. Clock speeds saturated because we can't run faster without using too much power. To move forward, IT has to figure out how to make things work in a nearly reversible fashion. Today, each bit erasure dissipates the bit's energy. If we can develop reversible devices, then processing technologies can go much further, and peta or exaflop devices could fit in your pocket. If we're stuck with the transistor, then we're stuck with 1-5 GHz computers for as long as I can see.
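
[For context on why reversibility matters: the thermodynamic floor for erasing one bit is the Landauer limit, kT ln 2. A minimal sketch; the erasure rate below is my own illustrative assumption, not a figure from the talk:]

```python
import math

k_B, T = 1.380649e-23, 300.0             # Boltzmann constant (J/K), room temperature (K)
e_bit = k_B * T * math.log(2)            # Landauer limit: minimum dissipation per irreversible bit erasure
print(f"Landauer limit at {T:.0f} K: {e_bit:.2e} J per bit erased")

erasures_per_s = 1e18                    # assumed: an exascale machine erasing one bit per operation
print(f"Thermodynamic floor at that rate: {e_bit * erasures_per_s * 1e3:.1f} mW")
```

Real transistors dissipate many orders of magnitude more than this per switching event, which is the gap that reversible devices would try to close.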

Josh: Is anyone still working on optical computing? (Something about power being too high.)

Tom: We analyzed and never went into it. Plasmonics is the new thing: light couples to smaller waves, lets you miniaturize the devices. So far I haven't seen a device that's better than transistors.

Pearl: Any questions from audience?

Audience: Once we start working with cellular sized machines and we have nanoscale computing elements, what conceptual bridges do we have to cross to create a neural computer interface?

Josh: A lot of software.

Tom: I won't answer that, but the fundamental problem with neural anything is that we don't understand the algorithms the brain runs. We can do things that a computer can't do no matter how long we give it. We basically know that most of what the brain does is pattern recognition; we (meaning the neuroscience community) know this can be mapped to Bayesian inference; trouble is, that's NP-hard; we don't know what approximations the brain uses. And remember the brain does this with only a few (~10) logical operations. If I knew the algorithms, I could have a group implementing them; so you're asking for a breakthrough in algorithm, not nanotech.

Malcolm: [??] will do two studies in 2008; one is neuroscience; there's some very interesting work going on, trying to figure out how the brain waves couple into thought and actions; this is something the National Academies are working on.

Josh: The [??] group at MIT has an architecture that's as good as humans at recognizing dogs.

Audience: What do each of you see as the most important technological development in the next ten years in nanotech?

Josh: I don't know.

Malcolm: I'd hope it would be a fundamental understanding of the potential of APM. Without having achieved it, but at least understanding where we need to go, where to invest, where's the low-hanging fruit.

Tom: That device I described, the device that can exceed the ultimate performance of the transistor, that's most important for IT.

Papu: Mobility. With mobility, we get convergence. Power: We need a portable form factor. Also, we need a display bigger than the device. Third one: health-related diagnostics: mobile device for personal health. That goes to medical/bio sensors.

Tom: Outside IT, and maybe more important than anything in IT, getting the cost of photovoltaics below coal could be biggest. That could happen around 2015.

Pearl: Want to thank every panel speaker.

[... And that wraps up the conference! It's been fun...]

Chris Phoenix


Commercializable Nanotech Solar Cells

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next up: A Comparison of Nanotechnology-Enabled Photovoltaic Materials and Devices with Near-Term Commercialization Potential
Robert J. Davis, Director, Nanotech West Laboratory, The Ohio State University

Nothing in this talk that wiggles or swims; but it's very useful.

Talk structure: Intro to photovoltaics; nanotech-enabled PV; likely entry points for nano in commercial PV in next five years, and why; overview of a PV research center.

We need to harvest power at cost comparable to fossil-fueled power plants: 6-11 cents/kWh. Solar cells need to drop cost by 75% from $5/peak watt.
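
[A rough way to see where that 75% figure comes from; the capacity factor and lifetime below are my own assumptions, and this ignores balance-of-system costs, financing, and maintenance:]

```python
hours_per_year, capacity_factor, lifetime_years = 8760, 0.18, 25   # assumed values
for cost_per_wp in (5.00, 1.25):          # today's ~$5/Wp vs. the 75%-reduced target
    kwh_per_wp = capacity_factor * hours_per_year * lifetime_years / 1000
    cents_per_kwh = 100 * cost_per_wp / kwh_per_wp
    print(f"${cost_per_wp:.2f}/Wp -> cell contribution of roughly {cents_per_kwh:.1f} c/kWh")
```

At roughly $1.25/Wp the cell's own contribution drops to a few cents per kWh, leaving room for the rest of the system while staying near the 6-11 cents/kWh target.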

PV cell operation: I/V curve is like a diode, with a sharp "knee" at the origin. Under illumination it shifts down and to the right. You want to take off power at the knee; otherwise, you'll waste either voltage or current.
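
[If it helps to see the "knee" numerically, here's a minimal single-diode sketch; every parameter value is an illustrative assumption, not from the talk:]

```python
import numpy as np

I_ph, I_0, n, V_t = 3.0, 1e-9, 1.0, 0.02585   # photocurrent (A), saturation current (A), ideality, kT/q (V)
V = np.linspace(0, 0.7, 10_000)
I = I_ph - I_0 * np.expm1(V / (n * V_t))      # current delivered by the illuminated cell
P = I * V
i = np.argmax(P)                              # the maximum power point sits right at the knee
print(f"max power point: V = {V[i]:.2f} V, I = {I[i]:.2f} A, P = {P[i]:.2f} W")
```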

Crystalline and polycrystalline silicon is pervasive but expensive at >$5/Wp just for the cell itself; there's been a recent worldwide silicon feedstock shortage; but even without this, can it ever meet cost targets?

Amorphous silicon, thin film deposited: 8% efficiencies, but it might be improved by a-Si:Ge heterojunctions.

Thin film II-VI compounds, CdSe, CdS, CIS, CIGS. These get around 10% efficiencies (polycrystalline) and are lowest cost at this time.

This talk will focus on nanotech in absorber layers, electrode layers.

Absorber development: Multijunction cell based on III-V compounds: use epitaxial [thin, precise] layers of AlGaAs, InGaP, GaAs, InGaAs to approach or exceed 40% efficiency. [BTW, II-VI and III-V refers to the column of the periodic table.]

Also, quantum-dot based absorbers, and nanoparticle precursors for CIGS and other films.

Multijunction cell: GaInP junction, 1.8 eV; GaAs at 1.42 eV; window layer, transparent graded layer to step to InGaAs 1.0 eV. These were recently published and are beginning to sample commercially. Details: You need quantum mechanical tunnel junctions to transmit holes and electrons between the layers. Also, you have to manage defects and crystal-lattice strain. Also, these cells are used in concentrator applications so you need to manage extreme thermal issues.
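
[Those bandgaps translate directly into the wavelengths each layer can absorb, via lambda(nm) ~ 1240 / E(eV):]

```python
# Absorption edge for each junction quoted above
for name, e_gap in [("GaInP", 1.8), ("GaAs", 1.42), ("InGaAs", 1.0)]:
    print(f"{name}: absorbs photons shorter than ~{1239.8 / e_gap:.0f} nm")
```

Stacking them lets each layer take the slice of the solar spectrum it converts most efficiently.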

These types of cells were developed for spaceflight applications, but are now being shipped for concentrator applications (500 to 1000 suns). The nice thing about concentrator is that you can use tiny (cm^2) cells. Some cells are being grown on Ge:SiGe wafers for better mechanical properties. Also, nanodots are being added to increase IR usage.

Solar cells based on quantum dots in a matrix are low efficiency (2%).

Nano-ink is used to deposit II-VI. I didn't catch why, but this is likely too expensive for anything but military applications.

Electrode development: Transparent conductive oxides are expensive, hard to put on glass, hard to control stoichiometry (material ratio). But there's considerable work on single-wall nanotubes in polymer. This material is also useful in touch screens and electrostatic protection, so it's probably going to be developed usefully.

So, nanotube electrodes are probably going to be the main early entrance of nanotech into PV. Also, multi-junction may not be pure nano, but may provide an entry point for nanoparticles.

Ohio Wright Center is working on solar cells; trying to capitalize on auto-manufacturing expertise in building big-ticket items out of metal, glass, advanced polymers.

... Yep, he was right, no productive nanosystems here. But a solid interesting talk.

Chris Phoenix



Solid-State Lighting

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: Molecular Design of Solid State Lighting for Energy Efficiency
Paul E. Burrows, Laboratory Fellow, Pacific Northwest National Laboratory (and over 100 patents)

Artificial lighting was perhaps the most important/irreplaceable result of our first invention, fire.

Candle: 0.05 lumens/watt
Gaslamp: 0.5 lumens/W
Incandescent lightbulb: 15 lm/W (5% efficient)

Artificial light uses vast amounts of electricity. Conventional light, even fluorescent, can never be more than ~20% efficient. Lighting accounts for 22% of US electricity and 8% of total energy consumption, costing $50B per year and emitting 150 MTon of CO2 per year. Most of this is 19th century technology!

Typical lightbulb has a 2800K blackbody spectrum; 95% of the energy is in the infrared. Most is over a micron; the eye responds to ~400-750 nanometers.
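
[A rough numerical check on that claim, integrating the Planck spectrum at 2800 K; the visible fraction comes out under 10%, in the same ballpark as the ~95%-in-the-infrared figure:]

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 2800.0                                     # filament temperature (K)

def planck(lam):
    # Spectral radiance of a blackbody (overall scale doesn't matter for a fraction)
    return lam**-5 / np.expm1(h * c / (lam * k * T))

lam = np.linspace(50e-9, 50e-6, 200_000)       # 50 nm to 50 um covers essentially all the output
total = np.trapz(planck(lam), lam)
vis = (lam >= 400e-9) & (lam <= 750e-9)
print(f"visible fraction at {T:.0f} K: {100 * np.trapz(planck(lam[vis]), lam[vis]) / total:.1f}%")
```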

Electrons in a semiconductor can only occupy certain levels; thus, you don't get a blackbody spectrum. The photon energy is defined by the bandgap of the semiconductor.

Commercial LEDs can be expected to reach 50% efficiency and possibly more.

Hot off the press: 1,050 lumens in cool white @ 72 lm/W; 4 amps in a mm^2 die, at 150 degrees C. Very impressive!

"Nanotech isn't a length scale, it's a state of mind; how you think about making materials." Design a molecule for a particular function.

Molecular structure determines color; even in films, the molecules interact weakly, so the photophysics is determined by the bonding inside the molecule. Some molecules can hit 130 lm/W. "All you have to do is convince customers that they want green light bulbs in your living room."

Using phosphorescence, not fluorescence, which means you want high triplet exciton energies. This is because of electron injection--he didn't have time to explain this.

So there are some small molecules (three rings) that have ~3 eV energy, but they're too small to have nice material properties. But you can add a phosphine oxide group to provide a point of saturation -- no conjugation past that point -- so it isolates the optical part. And it makes the outer wings of the molecule negative - very high band gap (which means you can inject electrons at the right energy, if I understand correctly).

So you can make films of this stuff, thin-film structures just a few hundred nm thick. Konica Minolta has achieved 60 lm/W. With further development, it may achieve 200 lm/W. And it's diffuse light (because you can't put a lot of power in a small area of organic molecules, you'll fry it) (on the other hand, you can easily print the thin film by the square meter) so you don't need lampshades so you save efficiency there too.

Green fluorescent protein is 80% efficient, but the artificial version of the fluorescent part is 1% efficient and has a broader spectrum of light. Why? It flops around; in natural GFP, it's held in the right position by the rest of the protein. Could this be done artificially? Let's hope...

Fluorescent efficiency can be enhanced by a nanoparticle that creates plasmons to couple the energy out of the molecule. But the spacing must be exactly right. Can this be done by engineered molecule? Let's hope...

Electron transfer rate between organic molecules: very small changes in spacing affect the electronics of the molecule.

Can we design optimized device components that make an efficient light? Perhaps Schafmeister molecules could be a way to make the right spacing.

Great quote: "Report from the Select Committee on Lighting by Electricity," London, House of Commons, 1879: Electric lighting has inherent problems and can never replace gas.

We're not hitting the bleaching lifetime of the molecules; moisture kills it; that's a packaging problem.

Clever idea: using inorganic LEDs, you could flicker them fast enough to transmit lots of data (too fast to see) so you piggyback networking (at least half-duplex) on your lights.

Chris Phoenix


Military Application of Atomically Precise Manufacturing

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: Low Cost, Atomically-Precise Manufacturing of Defense Systems: Progress and Applications
David R. Forrest, Engineer, Naval Surface Warfare Center and President, Institute for Molecular Manufacturing

There are immediate defense applications for nanomaterials... but there's also a long-term vision for atomically precise manufacturing (APM). Beyond that, he wants large atomically precise objects. There are advantages to this...

The roadmap projects we might get APM sooner than we expect: 10-25 years. (With hard engineering work for the next 15 years.) And this is applicable to next generation conventional weapons. But Real Security is something else... above and beyond building lethal weapons.

Current nanotech for defense:
Filtration and biocides
Composite material reinforcement
Multifunctional coatings
Energy and electromechanical systems
(this is sourced from public information, SBIR etc, that's on the web.)

High-efficiency, high-throughput filters: not only remove, but apparently kill pathogens; less clogging; made of alumina.

Composite material enhancement requires functionalized nanotubes (add molecules sticking out so they don't just slip past the polymer). Nanotubes can control electrical properties as well as mechanical (useful for radomes).

Coating: e.g. nickel nanostrands as additive to make paint, resin, etc., conductive. This can make aircraft more resistant to lightning strike. Super hydrophobic surfaces, damage sensing, self-repairing. Nanoparticles in glass coatings for abrasion resistance and anti-fogging.

Energy: nanotube-enhanced ultracapacitors. (May reduce the 20-lb(!) battery weight US soldiers carry.) Nano-crystalline cores improve the efficiency of power transformers.

Better body armor.

Flexible solar cells; nanotubes on silicon increase surface area [does this help light-gathering?]

So that's what's happening today. But we want to get from nanomaterials, to devices, to systems, to atomically precise manufacturing. Just because it's nano doesn't mean it's atomically perfect. Nanotubes are strong because they're defect-free, not because they're nano. So you need atomically precise manufacturing to get the high strength.

Forrest put up a graph of various materials: standard, micro-crystal, and theoretical strength. It was very impressive, how much higher the theoretical strength was. A Gerald R. Ford class carrier, currently weighing 100,000 tons, might be built for 2,500 tons if it were built of atomically precise steel. (In theory.) Rail guns need strong conductive materials... well, atomically precise copper has 80 X the strength.

There are similar implications for toughness, wear resistance, corrosion resistance, fatigue strength, creep strength, and oxidation resistance. [Forrest has ductility on the list too, but I thought ductility was a result of defects.]

Future materials may be very different: for example, Josh Hall's utility fog. Each microscopic particle is a robot that contributes to a truss. Similarly, defense systems might be rethought--might be very different from today's.

We now have a roadmap for atomically precise manufacturing; we need components and systems. Nanoparticles are not on the APM trajectory! The roadmap document includes a list of components: biomotors, DNA-based robotic arm, nanotube nanomotor... that already exist. Take a systems engineering approach and start integrating these things in a systematized way. The range of technologies described in the roadmap--Seeman, Schafmeister, scanning probe synthesis, hybrid techniques e.g. Zettl, simulation of atoms...

Challenges of the transition:
- Low hanging fruit vs. APM leads to bias toward status quo.
- Need acknowledgement that APM is technologically feasible, biologically demonstrable, economically inevitable.
- Commitment: focused R&D. Success is more about commitment than strategy; the roadmap outlines many paths.
- Industrialization and scaleup... to do the necessary R&D. Take a systems approach; integrate using standardization.
- Meeting the consequences. There are serious responsibilities accompanying a technology this powerful. Best approach is to address them directly.

Forrest skipped three skeptical questions, because he's short on time. The slide was up for half a second:
Doesn't it need a breakthrough? No, just R&D. Isn't large-scale nanofabrication impossible? No...

Real Security
- More than just a function of improved lethal force. It's a function of prosperity. APM can win hearts and minds by providing basic needs, flexible transportation, education, healthful environment...
- Comes from addressing the double-edged sword of technology directly. Prevent surprise, prevent misuse, use embedded safeguards, avoid arms race...

Summary: APM isn't about making small things. Productive Nanosystems makes large things, at improved cost, flexibility, and performance.

Question: How should funding flow? A: It'll come from a number of sources. DARPA, Army, Navy... there's funding on nanomaterials... the roadmap will help, but we need to get their attention.

Chris Phoenix


Mechanical Molecular Electronics

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Sir Stoddart Speaks: Dr. Fraser Stoddart, winner of the experimental Feynman prize, will be talking about molecular motors. The previous speaker, David Leigh, won the theoretical Feynman prize for molecular motors. So far, Dr. Stoddart is talking about previous prizes, meeting the Queen (she noticed when the announcer at the knighthood ceremony got the word "nanotechnology" wrong), his childhood... he played with jigsaw puzzles and Meccano sets... he learned to like crossword puzzles....

Crossword puzzles are just about words, not grammar or sentences. Chemistry is currently in the pre-sentence stage; it has the potential to write sentences and paragraphs, but we're not close to there yet!

Feynman mentioned that information can be packed very tightly, and computer elements can be shrunk a lot; Feynman talked about the ad-hoc but usually successful process chemists use to figure out how to make molecules.

OK, he's just outlined the technical part of his talk. Two trips down memory lane, rotaxanes and catenanes, switching in those molecules, switching used in computers, defect tolerance, and four or five other topics... trouble is it's now 12:28, and lunch was supposed to start at 12:30.

Anyway... he's giving a history of learning to make catenanes.

After the first catenane that was just a structure, he built one that "liked" to have a certain part of one ring inside the other ring. But when it was oxidized, it rotated out. This could be used to make a computer memory element by sandwiching a monolayer between two electrodes. A linear version (rotaxane) made a better one. Bill Goddard simulated the shape of the molecule and its electronic properties.

They investigated the effects of packing the molecules tighter or looser in their monolayer; investigated the molecule in solution.

They designed a fault-tolerant memory architecture. They built an array of 160,000 bits in the space of a white blood cell: the density in the semiconductor roadmap for 2020.

To impact molecular nanotechnology, we need significantly more complex molecules; templated organic synthesis; physical measurements; computational work; a feedback will happen between making, measuring, and modeling.

My takeaway lesson: you can do cool things with just a few atoms if you design the molecules right.

Chris Phoenix


Squishy Molecular Motors

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: Dave Leigh, School of Chemistry, University of Edinburgh, UK
Tooling Up for the Nanoworld

Nature already has a nanotechnology: nanomotors and structures and materials and catalysts... all done with molecules.

Lessons to learn from biological machines:

  • Soft not rigid
  • Work at ambient temperatures
  • Utilize chemical energy
  • Work in solution or at surfaces
  • Effect of scale - constant motion
  • Rely on Brownian motion
  • Made by self-assembly
  • Governed by non-covalent interactions
  • Statistical mechanics not Newtonian mechanics
  • Require architectures which restrict degrees of freedom
  • Operate far from equilibrium

[Ooh, I wish I had time to answer these point-by-point! This is basically a direct attack on diamondoid, and each point has an answer. In fact, I've already answered many of them over the past four years in my science essays though I shouldn't take too much credit because the answers have been known for quite a while.]

Random motion can be "harnessed" to do work, if you have a randomizing force, an anisotropic medium, and a "fuel" energy input (or information, which requires energy). [I've never quite been sure why you're said to be harnessing the motion rather than the fuel.]

He talked about several kinds of ratchets: if you change the potential energy "surface" that the particle experiences, then you can make the particle move without directly touching it. Like rolling a marble on a blanket by lifting up parts of the blanket. There are several ways to do it; they look quite simple and intuitive. There may be some reason why it wasn't obvious that these would work except in hindsight, but it's hard to see how they could *not* work given basic conservation laws.

The application to his molecular motors seems to be that the motors aren't moved directly by force, but jiggle themselves into the most "comfortable" position given some change (light, etc) applied to part of the molecule.

A motor that hides or exposes a fluorinated region can make a droplet of liquid move over a surface: movement of millimeters due to movement of nanometers. Kinda' cool, molecules controlling a macroscale object.

Starting with two ring-like molecules strung on a larger ring, he can make the small molecules move in a circle around the ring by hitting the ring they're strung on with two different colors of light in alternation, to bump first one and then the other molecule off its resting place.

There was a discussion of a "Maxwell's Demon" which is a thing that can't exist because it violates the laws of thermodynamics... except that he's built a molecule which squeezes a ring over to one side when you shine light on it... and there's some verbiage accompanying the phenomenon which makes it sound counterintuitive. But I can't help suspecting that a different explanation would sound much more intuitive, somewhat less mysterious, and just as cool from an experimental point of view.

Chris Phoenix


Top-down, all the way down

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: Information Technology: Toward the Atomic Scale
Thomas Theis, Director, Physical Sciences, IBM Watson Research Center

He starts by saying he likes being at a meeting where people are interested in making small things, not just developing new knowledge.

With regard to the semiconductor roadmap, to which the PN Roadmap has been compared: it's headed toward atomic resolution, but it's not focused on it. The semiconductor roadmap is focused on the next generation device (half the area of the current devices). It's good to have a longer-term focus.

He'll be talking about top-down, bottom-up, and integration of them. In real life, there's no purely top-down or bottom-up manufacturing process. And rather than talking about theory of energy vs. information vs. time in manufacturing, he'll talk about "whatever works."

Semiconductors are driving top-down small-dimension manufacturing. Conventional optics (with near-field correction--very expensive) can apparently get down to 22 nm features. [How's that for breaking the diffraction limit!]

In fact, he thinks that "top-down" can be taken all the way to atomic precision. The "millipede" scanning probe array may be used as a lithography tool, not just a storage device. You can write and erase bit patterns millions of times without wearing out the tips. But, despite 6 terabits per square inch, 4096-tip arrays, and hard disk speeds, this will probably not be a product... it'll be outcompeted by solid-state non-volatile memory. But they won't throw it away - they'll try to use it for lithography.

The resolution of the millipede is "quite a bit better than a nanometer." Heat up the tip enough, it evaporates the polymer, so it can do line crossing (without snow-plowing the polymer into the first line). 12-15 nm line width is "no problem."
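
[For scale, 6 terabits per square inch implies a bit pitch of about 10 nm, consistent with the line widths quoted:]

```python
import math

bits_per_in2 = 6e12
area_per_bit = 0.0254**2 / bits_per_in2        # m^2 per bit (1 inch = 0.0254 m)
print(f"implied bit pitch ~ {math.sqrt(area_per_bit) * 1e9:.1f} nm")
```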

Controlled electrochemistry with atomic precision: attach gold atoms to pentacene, covalently. Can change the charge state of the pentacene, because it's on an ionic solid (sodium chloride) which can reshuffle itself. This reduces the energy required to do reactions. So you can build some room-temperature-stable structures.

IBM is beating the semiconductor roadmap: air-gap dielectric. Block copolymer can make a coating with very small holes. Those holes can be used as a resist to etch air gaps. (Actually, vacuum, if I understood correctly.)

Carbon nanotube FET (field effect transistor). They coat a nanotube with insulator and metal, put electrodes around it, and voila.

DNA shapes are a possibility for future circuit-building; maybe not in 10 years.

Summary: Practical manufacturing will increasingly incorporate "bottom-up" chemistry.

Question: Do you see hydrogen depassivation scaling to wafer-scale? A: [Basically, probably not, but it may well be useful for other things.]

Chris Phoenix


Nano Investment in Singapore

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: Nanotechnology in Singapore: Towards Atomic-Scale Manufacturing
Khiang Wee Lim, Executive Director, Institute of Materials Research and Engineering (IMRE), Singapore

Drexler just gave this guy a very positive introduction. The talk is going to be less technical - talking about investment (public and private) and international participation in the Roadmap.

Singapore is at 2.36% R&D as a percentage of GDP. They aim to grow that to 3% by 2010. As a small country they have less inertia. They want to add more boxes to the National R&D Framework.

One of a dozen "Technology Scan Areas" is "Exploiting Nanotechnologies."

A slide of "Nanotechnology Industry Strategy" with a lot of text, including things like IP strategy... I apologize to the business people who are interested in this stuff, but I guess I'm just not. .... Oookay, now he's talking about hard disks and value chains... this may be a pretty short post.

OK, that's interesting: under "Data Storage Institute" he lists "Femto slider." That's 10^-15. Clearly not a length; maybe a volume? If so, the cube root gives a length of 10^-5 m (about 10 microns), which is not that small after all.

The nanoelectronics programme shows sub-20 nm devices starting in 2006. That is certainly interestingly small. If nothing else, it should drive a market for nanoscale lab equipment such as microscopes.

He's talking about technologies, but at such a high level that I can't tell how interesting they are. For example, sequential imprinting to control surface properties of polymer films: lotus leaf self-cleaning, etc. Gears: MEMS? NEMS? molecular? Theory or experiment? I can't tell, but he just said the purpose of the work is to demonstrate that mechanical concepts can be translated down to the molecular level. Hm, looks like he's built an actual gear-like molecule, and tried to pin it to defects on a gold surface. Looks quite interesting, but he skipped past the only slide with technical words on it, too fast to see.

Now he's talking about Zyvex-type atomically precise manufacturing. Including "Vertical sidewalls" for 3D silicon structures. That's cool; it means you can build tall things, not just pyramids.

Back to investment... private companies... embedded ID tags for anti-counterfeiting. They thought it would be used on cell phones and Gucci bags; the biggest market turned out to be a company that made automobile air conditioners... that were counterfeited so well that the company couldn't tell which of the warranty returns were theirs.

Roadmap considerations: Risk, standardization, multiple countries, health & safety issues, etc. Samsung has a washing machine that injects silver ions into wash loads to kill germs; the US EPA has ruled that this is a pesticide and gets regulated along with bug spray.

Countries getting into nanotechnology are hugely diverse. China is an early mover on standards; Taiwan on certification. In Taiwan, "nano" is positive, an advertising point, so the government wants to protect consumers by making sure that "nano" actually contains nano.

Risk framework from IRGC: two frames of reference: Frame 1: Passive things pose e.g. human health, explosion, ecological risks - known types. Frame 2 (active, integrated, & heterogeneous nanosystems) are said to pose new kinds of risk (oops, the slide went away). [I'm thinking most Frame 2 nanostructures won't actually be that interesting.]

By the way, they announced yesterday that the slides from the talks would be put up on a website in the next few weeks; assuming that happens, we'll post the URL on this blog.

Chris Phoenix


Shrinking Electronics

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: "Atomic-Scale Device Fabrication in Silicon"
Michelle Simmons, School of Physics, University of New South Wales, Australia

Michelle will be talking remotely from Australia, about making silicon electronic devices at the atomic scale.

In 2020, Moore's Law says we'll be at the atomic scale. We'll need deterministic doping [putting atoms where they will affect the electronic properties of the silicon]. We'll also need atomic level control of the interfaces between different materials.

The plan is to use single phosphorus atoms as quantum dots.

Doping error [standard deviation] is the square root of the number of atoms. So if you have 10,000 atoms, the error is 100, which is 1%. But with 100 atoms, the error is 10, or 10%. That means the threshold voltage is not reproducible between transistors.
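
[The same point in a quick simulation -- dopant counts are roughly Poisson-distributed, so the relative fluctuation scales as 1/sqrt(N):]

```python
import numpy as np

rng = np.random.default_rng(0)
for mean_dopants in (10_000, 100):
    counts = rng.poisson(mean_dopants, size=100_000)   # dopant count per device, Poisson-distributed
    print(f"{mean_dopants:>6} dopants: relative fluctuation ~ {100 * counts.std() / counts.mean():.1f}%")
```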

Quantum effects dominate at this scale [for electrons, not atom position!] - can this be used?

Silicon atoms in a surface can't be moved around the way Eigler moved metal-on-metal atoms; the silicon atoms are strongly bonded. [But Oyabu did manage to remove and replace single atoms; but that's more cumbersome than being able to push them around the surface.]

The goal is to make atomic features. Remove hydrogen atoms from a hydrogen-terminated surface, deposit phosphorous-containing gas, heat it to incorporate the P in the surface, then deposit more silicon on top, then deposit electrodes above the buried dopant atoms.

To understand what the microscope was seeing, when looking at PH3 gas on the surface, they had to calculate the energy level of lots of different configurations, then use density functional theory to simulate what it would look like to the microscope... then they could go back and identify surface features from the microscope image.

By calculating what the phosphorus does as it loses hydrogens, and how it incorporates itself in the silicon surface, they can now place P atoms with atomic precision.

She's talking about the amount of phosphorus vs. temperature. That doesn't sound atomically precise. I guess it's a different research direction.

It's possible to see buried 7-nm-wide wires reflected in the electronic properties at the surface.

It's possible to count the dopant atoms. And then, by building a Hall effect structure, count the charge carriers - and each atom creates a charge carrier. Similarly, they can demonstrate that the STM tip can completely remove the H protectant and let all possible P in.

OK, this next thing is really cool. They can build structures narrow enough to affect quantum effects. It goes like this: if electrons are able to make a coherent quantum loop between dopant atoms, they go around the loop in both directions at once, interfere, and are blocked; this increases the electrical resistance. Lower temperature and magnetic field allow bigger coherence length, bigger loops, and thus more resistance. But if they build a sufficiently narrow wire, then the biggest loops can't form, and the resistance doesn't spike as high. They can calculate the size of the loops that didn't form, and it matches the width of the wire they built.

Something cool that I didn't catch while writing the previous paragraph: ordered dopant vs. random dopant leads to an Arrhenius voltage curve... or something like that.

They can make single silicon dioxide layers by depositing a layer of oxygen, then a layer of silicon (at low temperature), then heating it.

They can build circuits using combinations of surface electrodes and buried gates.

They're working on nanoscale MOSFETs, 3D transistor architectures, atomically precise resonators, silicon-based quantum computers.

My observation: Although this is not moving/mechanical nanostructures, it is an example of atomic-precision fabrication. It's mainstream, it's semiconductors (which means that there'll be commercial attention), and it leaves no doubt that stable single- and multiple-atom, atomic-precision structures are being built by scanning probe microscope. This should go a long way to blunting claims that atomic precision fabrication is impossible on either practical or theoretical grounds.

Chris Phoenix


Simulating Cells

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: "Biological and Nanoscale Systems"
by Mitchel J. Doktycz, Research Staff, Oak Ridge National Laboratory

He'll be talking about how biological nanosystems fit into the Roadmap. Biological nanosystems are atomically precise structures made using atomically precise technology. But how do they become a functional nanosystem?
- How is the system engineered?
- How is information processed to balance material synthesis and energy production?
- Why are the components the size that they are?

So biology is already at Atomically Precise Manufacturing--but we don't know how they're engineered.

What's exciting for a bio person is that nanotech lets us engineer tools at the molecular and macromolecular scale, so we can interact directly with the lowest level of biology.

Subtle point: biology is very hierarchical: molecules dictate higher-level function. In nanotech, there isn't a hierarchy of control (yet) (that would need engineering we don't know how to do).

There's a 1/4 power law in organisms: life span lengthens and metabolism slows in proportion to the quarter power of an organism's mass. This is caused by the physics and geometry of transport, and the cell being the fundamental unit. [I think he means that cells don't change size much in different-sized organisms, so transport has to work harder in bigger organisms.]
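
[To make the quarter-power scaling concrete, here's a tiny illustration; the body masses are made-up round numbers:]

```python
masses_kg = {"mouse": 0.03, "human": 70.0, "elephant": 3000.0}   # illustrative values
ref = masses_kg["mouse"]
for name, m in masses_kg.items():
    print(f"{name:>8}: lifespan scale ~{(m / ref) ** 0.25:.1f}x, "
          f"per-gram metabolic rate ~{(m / ref) ** -0.25:.2f}x (mouse = 1)")
```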

Cells have much higher power density and energy density than batteries.

Cells have high functional density. An E.coli cell is about 2 microns. It contains about 50 million molecules. Diffusion becomes reasonable: any two molecules meet each other every second in a micrometer-sized volume. [But that doesn't guarantee that they'll meet in the right orientation to interact! -CP] A few dozen molecules can form a concentration gradient. Cells have to trade off between energy, information, and material functions.
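
[A back-of-the-envelope version of the "meet every second" claim, using the Smoluchowski diffusion-limited rate; the diffusion coefficient and capture radius are my own assumptions:]

```python
import math

D = 10e-12      # m^2/s, assumed diffusion coefficient for a protein in cytoplasm
a = 5e-9        # m, assumed capture radius for an encounter
V = 1e-18       # m^3, roughly a 1-micron cube (about an E. coli)
k = 4 * math.pi * (2 * D) * a    # diffusion-limited rate constant; 2D because both partners diffuse
print(f"mean time for one specific pair to meet ~ {V / k:.1f} s")   # comes out near one second
```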

Mimicking cells: "What I cannot create, I do not understand" -- Feynman. Through design of synthetic nanosystems, try to understand cells. So a cell mimic: a network of molecules inside a membrane. We want to build this... simpler. Build the membrane by micro and nano fabrication. Build the molecular network by DNA-based instructions fed through cell-free transcription. This lets us start to understand integrated networks and the effects of scale.

... Sorry, I missed a slide. There was an earsplitting alarm from the kitchen which is on the other side of a flimsy wall from the conference room, and it took them several minutes to turn it off.

Nano-fibers can mimic pores. .... Something about ink-jetting onto cells.

Membranes are key. ... Argh, the alarm is back.

Something about anomalous diffusion, where the diffusion constant is time-dependent. Spatial control of diffusion.

Enzyme containment: put horseradish peroxidase inside the cell mimic structure, flow stuff through it.

They can contain DNA, and use it to generate protein by flowing through a "cell-free extract."

Electro-actuation can cause a volume change (30%) which traps 50 nm beads against a wall made of silicon posts coated with polymer. Can even capture and release individual proteins.

Summary: bio systems are a practical model for understanding functional nanosystems. Nanotech provides a platform for examining hypotheses that can't easily be tested in biology.

Question: Eukaryotes have lots of transport and other internal mechanisms; is diffusion enough? Answer: 30% of proteins are trans-membrane [so a lot of things go on separate from internal mechanisms].

I asked about whether the need to bump in the right orientation would mean that the one-a-second encounters resulted in action once every ten, 100, 1000 seconds, or what. Answer: Volume packing(?) increases the encounters/activity. If I'd had time, I'd have asked a follow-up: doesn't that correspondingly slow diffusion? But maybe that was already taken into account in the one-second calculation.

My reaction: it seems that building artificial cells and structures is a useful way to build some kinds of nanosystems, and to research some kinds of phenomena. I'm not sure it'll be very relevant to diamondoid or other high-performance systems. But for diffusion-limited, fluid-drag limited productive nanosystems, the techniques and approaches described here may be quite useful.

Chris Phoenix


Studying Mechanosynthesis

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: "Single-Atom Manipulation and the Chemistry of Mechanosynthesis" by Damian G. Allis, Research Fellow, ICPRFP; Senior Scientist, Nanorex; and Theorist in Residence, Syracuse University

This should be a very interesting talk, because it's about the kind of reaction that'll be used to build diamondoid structures. He starts by talking about what nanotech used to be--atomic precision mechanical chemistry--before the nanoscale researchers started taking it into the realm of imprecise constructions.

Mechanosynthesis is: Positional control of reactants, control of orientation, asymmetric reactants (e.g. putting a small molecule at a chosen location on a surface), control of environmental conditions. Goal is programmable control of assembly processes, making complex covalent structures that may be inaccessible to ordinary chemistry. [Without mechanical input, it's hard to select between chemically similar reaction sites. Also, mechanical force can create conditions that would be really extreme in ordinary chemistry, such as very high pressure.]

Chemists do their work by changing the electronic properties of atoms within a molecule: internal control, which lets them select reaction sites [with difficulty]. Mechanosynthesis selects locations mechanically and directly.

Supramolecular synthesis: molecular building blocks. Instead of targeting between adjacent atoms, it may be easier to build slightly lumpier (but still precise) constructions out of medium-small molecules, and then only have to select between molecules.

He's showing a 222-carbon graphite sheet--this has actually been synthesized--and talking about how hard it would be to target a particular atom by chemistry, and how much easier by mechanical selection.

Now he's showing complex diamondoid machine parts, and talking about how we hope to figure out synthetic pathways to build them. [Chemists would not be able to build such things.] Is there any evidence we can build such things? Yes... scanning probe microscopes have done chemistry. It's primitive, but so was the first transistor.

Tool tip designs: deposit carbon dimers to build diamond. Or even single atoms. (The ultimate level of control of matter.)

There are various levels of precision when simulating atoms. You want at least Hartree-Fock, if not DFT (density functional theory). That takes a lot more computer time. Today's tool tip talk represents the DFT level.

Designing tool tips... If you stick an atom onto an adamantane, then pull it off, you get a dangling bond. But there are other molecules (AL7 and iceene) that rearrange bonds so nothing dangles. (So it'll take less energy to transfer the atom, which is good (at least up to a point)).

The hardest part of designing a tool tip is defect structure analysis. You have to figure out every way it can rearrange that you don't want. Find the transition states, so you can analyze how likely it is to happen. If it's not going to fall apart, then you have to look at the tooltip-workspace transfer energies. Finally, you do molecular dynamics simulations, to find the mechanical properties of the operation.

... Sorry, but he's talking very fast about things I can't quite follow. I'm not sure whether "hydrogen abstraction" is a good thing or a bad thing at the moment, and what energy states are being analyzed. But overall, he's talking about how to analyze whether the structure will fall apart in certain ways. Even if the defect state is lower energy, the transition state may be high-energy enough that it's hard to get there, so the tool tip will be stable [actually, metastable].
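
[The metastability point can be put in numbers with a simple Arrhenius estimate; the attempt frequency and barrier heights below are illustrative assumptions, not values from the talk:]

```python
import math

k_B, T = 8.617e-5, 300.0        # Boltzmann constant (eV/K), temperature (K)
attempt = 1e13                  # assumed vibrational attempt frequency (1/s)
for barrier_eV in (0.5, 1.0, 1.5):
    rate = attempt * math.exp(-barrier_eV / (k_B * T))
    print(f"barrier {barrier_eV:.1f} eV -> rearrangement rate ~{rate:.1e}/s, mean lifetime ~{1/rate:.1e} s")
```

So even a defect state that is lower in energy can be effectively unreachable at room temperature if the transition state sitting between is high enough.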

... Something about depositing atoms onto a workpiece at edges and corners, not just into the middle of a surface... Something about transferring atoms between tooltips... He's been running out of time and has been talking even faster for the last five minutes.

Question, something about how the defect structures are found. There's no formal way to generate them; use either chemical intuition, or shake them up (in simulation) and see how they fall out.

Drexler says: his intuition is that there won't be practical applications for these kinds of reactions, in vacuum, for several (tech) generations in the future. But the same kind of analysis can be applied to peptide (protein) bonds in water. Damian agrees. [The Nanofactory Collaboration would probably disagree - they want to develop nanofactory-level technology by direct early use of this kind of mechanosynthesis.]

Chris Phoenix


Nanophase Materials

Chris Phoenix is providing live blog coverage for us on all the presentations from an important conference on Productive Nanosystems: Launching the Technology Roadmap...


Next talk: "Nanophase Materials. A Persistent Enabler" - Dennis W. Smith, Jr., Department of Chemistry, Clemson University

Welcome to the second day of live-blogging the Productive Nanosystems conference. The first talk is about "recent examples of functional nanosystems related to polymer synthesis and applications in photonics, energy conversion, and renewable materials." (By the way, the agenda for the day can be found on the SME website.)

Dennis does polymer chemistry. A good quote: "Dear Colleague, Leave the concept of large molecules well alone ... there can be no such thing as a macromolecule." Advice given to Hermann Staudinger, future Nobelist, in 1920. Mike Treder starts his talks with a bunch of quotes about how flying machines are impossible and the world only needs five computers. This might be a good quote to add to the list, especially since a diamondoid nanomachine component is essentially a *really* large, highly crosslinked, macromolecule.

A diblock copolymer is two different polymer molecules joined together. They self-assemble into intricate semi-regular structures. Most pictures of them look pretty randomly wavy, but he showed a couple of pictures of "guided self-assembly" with very straight lines and sharp angles. That's cool.

Diblock copolymers may be useful for fuel cell membranes; they can create several different "zones" in the material.

Polymeric nanocomposites are about putting nanoparticles in plastics instead of larger "fillers" that have been used for a while. They can make the plastic work better. Nanotubes can have more interesting chemistry than e.g. carbon black (also a nanoparticle). There's apparently a chemical interaction going on between the nanotube and the polymer (polyaniline) - not just non-covalent interaction. And you can make clear, electrically conductive polymers. In a piezoelectric plastic, 0.05% nanotubes increases performance by seven-fold.

BODA can be turned into aromatic molecules, then reacted with carbon nano-onions--which hasn't been done before, because the onion surface is very graphite-like. This is nice because it solubilizes the onions, and onions are photoreceptive.

Can make carbon nano foams for electrolytes.

Can make photonic materials by putting spheres into an array. You can put polystyrene into rubber(?), making a photonic crystal. Then when you stretch it, the bandgap (color) changes. You can make versions that respond to solvent.

Can build polymers that detect specific anions. Sulfonate membranes to make fuel cells. Functionalize nanoparticles. Build low surface energy materials (nano-roughness, so dirt doesn't stick; these may also have useful optical properties, I think).

POSS: well-defined molecules that are big enough to be nanoparticles. Fluorinated POSS is especially interesting. It dissolves well in fluoropolymers (e.g. Teflon), making the polymer easier to handle and giving it interesting properties.

So, there are lots of materials and chemicals and substances with tweakable properties. Most of this doesn't seem directly relevant to general-purpose molecular manufacturing, but any little thing may turn out to be useful. I'm guessing that this sort of work will feed into basic science for spinoff designs, rather than bulk polymers being incorporated directly into atom-precise nanomachines.

Chris Phoenix


Molecular Manufacturing Panel

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


The first day of the Productive Nanosystems conference ends with a panel. Here's a semi-transcript of what was said:

  • Christian E. Schafmeister, Department of Chemistry, Temple University
  • John Randall, Vice President, Zyvex Labs
  • K. Eric Drexler, Chief Technical Advisor, Nanorex
  • Keith Firman, School of Biological Sciences, University of Portsmouth
  • Moderator: James Von Ehr, Founder, Zyvex Group

James: Defend your approach!

John: Covalent materials approach: this approach can build a wide range of materials. Silicon, oxides, and metals have much more tractable design rules than biopolymers. The disadvantage of this approach is that it's serial, hard to scale up a lot. But there are a lot of applications that don't require huge quantities of material. And there's room for exponential scaleup of throughput once initial markets are established. But I wouldn't want to discourage any approach I've heard today.

Christian: Approach based on catalysis: big molecules that make [or join] small molecules; molecules making molecules. Starts out being atomically precise. Biology is all about catalysis. We have about 20,000 different molecular machines in our body; we know that works. If we can develop that sort of control, then we could do all the same things that nature does, nanosystems building nanosystems very cheaply (cell turns into blue whale).

Keith: Biological motors... biology offers us something now, and we should use it now. A driving force of science is to make money. Synthesize large arrays of materials in a different way. Use biology as an engineering tool: the key to the future. It's doable, and it's doable now.

Eric: The ideas I was talking about earlier are strong in part because there are different ways of putting together materials. John mentioned an important concept: design rules. Successful engineering areas have design rules. It's not an experiment to make a new cabin with carpentry. Some areas of nanotech are now at the point of carpentry. When I first talked with Paul Rothemund, I asked him what's hard and what's easy. He shook my faith in experimental work: he said it worked the first time and every time since. That's like carpentry. Elsewhere, the design rule is just "Here is this new thing, you can use it." In looking at new pathways, a trap to avoid: Imagine a pathway, think "you could do it this way [sub-path]," find a flaw, conclude the whole pathway doesn't work. The rate of progress will be determined not by difficult problems, but by the average rate of progress. [Just bypass the most difficult sub-problems because there will be easier alternatives.] There may be thousands of labs, and the fastest one will win--most don't need to succeed at all.

Keith: Negative results are a caustic subject... while fusing proteins, sometimes we get two proteins that change each other's properties. And that's a negative result, and doesn't get published. It shouldn't be lost.

James: Change the topic a bit... With our current national nanotech program (NNI), which is mostly non-atomic-precision, we're spending money to study environmental implications, but some say not enough... as we move toward atomic precision, will we need to study this as much? With better repeatability, will we be able to get by with less study?

John: No, every new technology needs to be looked at. If we can make it, and can understand what its impacts will be through relatively simple experiments, we might be able to do fewer experiments... but it's likely to be unpredictable. One thing that's less appreciated is how much better regulation is than it used to be. Maybe we're overreacting. He refers to the argument that banning DDT has killed millions of people from malaria. We do have checks and balances in place. I'm not so fearful of the new things coming out, because I think we do have an infrastructure to look at these things.

Christian: In relation to the molecules we're planning to make, I'm not that worried about environmental hazards, because they're organic and water soluble - they'll get cleared out by the kidneys very quickly. Matthew's gadolinium molecules were cleared by the kidneys in minutes. I'd worry more about big greasy molecules like nanotubes; they'll collect in fat cells etc. I'm also not too worried about them interacting with biological machinery. A protein-protein interaction requires one surface that has to match another surface. Imagine if you scrambled the patterns on two checkerboards; they'd be very unlikely to match.

Keith: Biomolecules will be antigenic [they annoy the immune system]. I was surprised to hear that even carbon nanotubes are antigenic. But I do come from the UK, which has suffered problems recently with science: BSE/mad cow, genetically modified food. We're very wary now; communication is the key. Science needs to communicate. I think the dangers are less than the public thinks--but we need to tell the public that. Third thing: the response of Prince Charles to nanobots. That image didn't do the scientists a lot of good. We're now unpicking that damage, trying to reassure the public that nanobots, even if they exist, won't destroy the world.

Eric: I get a lot of questions about the risk involved in what people are doing today. I answer that that's basically a question of toxicology. There are some new questions of regulation and classification, but it's basically just toxicology. There was an early phase when people said "nanotubes are just graphite so it's not a problem." That era has passed, and that's a good thing. There's been an overreaction, precisely because a thought experiment in my '86 popular book--which was obsolete by '92--was grabbed by sci-fi writers and the popular press, twisted, blown up, distorted, despite my attempts to alleviate this. One reason for this was that there was a lot of excitement about nanotech, and lots of people were saying "What we're doing today is nanotechnology--the whole thing." No distinction between particles and nanobugs. So it all landed on current-day researchers. We're largely past that era too. Looking forward along this pathway, nano is about getting better control of materials. Given regulations and human decency, people will use new capabilities to make better products with fewer downsides. Toxicology problems will fade. New problem will be new weapons; but microtech is leading there anyway, and nanotech won't be qualitatively different.

Audience (someone on the National Materials Advisory Board): Point of information: Environmental health and safety is still a hot topic here in Washington. Workplace, commodities, environment, health... still getting a lot of airtime. There'll be hearings in the near future. It won't cool down in the near future.

Audience: I have trouble thinking about pathways unless I know where I'm trying to get. James has said volume may not be needed for some products. I'd like to know what we should make in volume, and for bonus, in what time frame.

John: Something that's useful: a wall of silicon, a known number of atoms high and wide: a metrology standard. But also, a nano-stamp can make things with near atomic precision. You could do good things for the optics industry. There's also the possibility of molecular interaction structures (membranes?). Also, high-quality oscillators for compact radios.

James: Machine tools are a pretty good application.

Christian: My approach can inherently make large quantities of material. There's a 36-amino-acid peptide AIDS drug that's made on the ton scale every year. If we can make catalysts in a silica material that soak up CO2, water, and sunlight and create butanol, you could make automobile fuel from a paint-on coating.

Keith: If you're going to build a biological system, use it for biosensing. Biosensing has two components: recognition and a transducer. We're trying to develop a transducer, and combine it in an orthogonal approach with a sensor. I'm using DNA within the actuator; DNA is an interesting substance because most of the proteins involved with genetic disorders interact with DNA at some point. (Something I'm not catching about seeing how single-molecule drugs interact.)

John: Most of what I said, five years or less.

Eric: I divide applications by complexity and by the value per unit mass. The highest payoff per material is something that gives you unique information; e.g. the sequence of a DNA strand. A step up is something that processes information that isn't unique; e.g. memory. Instead of one memory cell per patch you address, you have 1,000 or more. A step up from there: molecular electronics, a long-term topic: you need a circuit board or some way of organizing the components. Similar category: therapeutic agents; catalysts: you have leverage. That's a sketch of some of the applications I see for structures where you have unit cells a few nm in size, and you get a high payoff from one, or an ongoing payoff from a few. Going forward of course the opportunities broaden.

Audience: Environmental impact topic: Eric, you said there was a concept in your book that you declared obsolete. I'm guessing that's replication. Is replication out because it's unsafe, obsolete, ...?

Eric: It's obsolete. All the factories in the world have the collective capability to make more factories. But there are advantages to specialized equipment passing components around. It's more efficient. Things that copy themselves--making a box that has all the complexity to make all its own components--biology shows it's possible but very far from easy. So there's no roadmap to it because it's not a desirable objective. If someone wanted to go to the effort to make such a thing, and additionally made a processor for materials from the ambient environment... it's hard to see what the motivation would be. It would be unselectively destructive. Usually destruction is intended to be selective, e.g. weapons.

Christian: I proposed something that does replicate. The idea was to have a solution containing components that could completely replicate all its components. But it's driven from the outside; there's a computer that controls it through each step. It would be a mind-boggling challenge to make an autonomous self-replicator.

Eric: History of ideas: The notions in Engines of Creation were early ideas intended to give a proof of concept of a way to get to macroscopic scaleup. The simpleminded thing was to imitate biology. But that wasn't a good idea and we've moved on.

Audience: When are you going to come out with products? I was listening to a panel like this last week; the panel's consensus was "back off, we won't have it any time soon."

John: We'll have products soon. The initial products won't justify the investment, so some patience is required. Going back to Eric's example of the blacksmith that could make his own tools: I heard [?] talk about Babbage being stymied trying to make his Difference Engine because he didn't have precise enough machining. Once we can make atomically precise tools, we'll be able to do a whole lot.

Christian: I'm hoping in the next couple of years to show applications that justify the effort. The challenge now is to find sequences that have the properties we want. Organic chemists are good at making molecules with different shapes - not so good at engineering function. An exception is Fraser Stoddart, who will talk tomorrow. But there's a heavy computational element. Molecules aren't like wood - you can't cut them off at any size you want. I'm currently writing software to try to find/design function. I'm hoping for a couple of years. But I can't give you a date.

Keith: Today I showed you a hanging-drop system that's a single-molecule sensor. That's two years ahead of schedule. In three years we should have our orthogonal goal. Dual measurement of a single event. That gives very good control.

Eric: I very much hope that Nanorex will make their product [software] available next year.

James: Value of this roadmap?

Audience: Protect/deprotect: Christian, you say you're using only two protect/deprotect, and that's all there is. But DNA uses Watson/Crick binding. Zyvex uses spatial protect/deprotect. Is there a way to combine these?

Christian: There are actually a lot of protecting groups--a book this thick. For us, there are really just three good classes: ones that are taken off by base, acid, or redox. That's what limits us. There are many others out there, but not with the kind of reliability we need. DNA synthesis does use protecting groups.

Keith: Question for Christian: DNA synthesis has an upper length limit; sounds like you're expecting something similar; do you expect DNA breakthroughs will help yours?

Christian: DNA is actually much better than peptide synthesis. I've seen 130-base DNA with very high purity. For us, we need high yields at each stage, and 99% is good. Steve Kent routinely makes 60-mer and 70-mer. I think we can achieve those lengths. It's important for proteins to be big, because they have to fold. We don't have to fold. So we may be able to get away with just building active sites.

Eric: The productive nanosystems that we know of, in biology, use clever, highly tuned kinetic proofreading. But ribosomes still make errors at a rate of ~10^-4 per step. DNA replication: ~10^-9.
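A quick worked example (my own illustration, not the panel's) of why those per-step error rates matter so much for long assemblies - the fraction of defect-free product is roughly (1 - error rate) raised to the number of steps:

```python
# Illustration: how per-step error rates limit the size of an error-free
# product.  P(all steps correct) = (1 - error_rate) ** steps.
for error_rate in (1e-2, 1e-4, 1e-9):   # ~99% yield, ribosome-like, DNA-like
    for steps in (60, 1_000, 1_000_000):
        p_ok = (1.0 - error_rate) ** steps
        print(f"error {error_rate:.0e}, {steps:>9} steps -> "
              f"{100 * p_ok:6.2f}% defect-free")
# At 99% yield per step, a 60-step chain is already only ~55% defect-free;
# ribosome-like accuracy (1e-4) supports thousands of steps, and DNA-like
# accuracy (1e-9) supports millions.
```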

James: Any final comments?

John: Value of roadmap will be judged by the number of people who read it and try to use it. Value will increase exponentially if we come back and update it.


Chris Phoenix


Making Nanotubes Useful

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Next talk: "Multifunctional Carbon Nanotube-Based Systems: Linking Synthesis and Function"

David B. Geohegan, Distinguished Research Staff Member, Oak Ridge National Laboratory

Link synthesis to structure; structure to functionality
Goal: Measure nanotube growth; understand macro-scale functionality

Nanotubes have two kinds of properties they want to develop: electronic (where nanotubes are extremely impressive - up to one milliamp per tube!) and structural (where it's still hard to build high-performance composites).

It's hard to build high-quality tubes in high quantity. And previously, it's been common to synthesize tubes and go directly to building stuff, without characterizing them.

Two ways to synthesize nanotubes: High temperature, which makes high-quality tubes, but also lots of other junk (low purity). Low-temperature synthesis grows tubes on substrates, at high purity, but with defects.

There are fundamental questions, such as: What's the difference between high and low temperature growth? Why can you sometimes grow tubes without catalysts? These questions remain after years of research...

(Why would a curved carbon structure grow above a flat metal surface? Computer study says that a curved carbon flake points the dangling bonds at the edges down into the surface, which is happier.) (In high-temperature synthesis, the tubes don't start to grow until the gas cools some... because the metal clusters that catalyze the growth don't condense until then.) (They found that condensed carbon clusters are consumed by newly condensed particles to grow the tubes.)

No standards of purity exist for carbon nanotubes. They're making single-wall nanotube (SWNT) membranes and measuring their optical properties.

Electric field-induced contrast: an imaging method in an electron microscope that shows how electrons are transported through a nanotube network between electrodes. Very cool! Wish I'd thought of it. Useful to investigate the electrical/electronic properties of nanotubes in polymers.

Then there's another layer of questions involving nanotube-polymer interactions...

There's a long discussion of growing nanotubes in a pulsed-laser reactor. Cool videos of plasma plumes. Lots of observations about the conditions in which various nanotubes grow. "This looks complicated, but it's only about six rate equations."

So, this talk was about studying nanotube growth and properties--for nanotubes grown and/or used in bulk. It's important to understand nanotubes, but this work is not about machines or even electronic circuits built of nanotubes. So I'm not sure that it contributes to our understanding of how to build atomically precise structures that don't happen to be nanotubes. For those interested in nanotubes: In response to a question, he said that he could envision a continuous-flow reactor that grew, centrifuged, cleaned, and sorted the nanotubes into bottles.


Chris Phoenix


DNA Origami, Extended

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Next talk: William Shih, "From Structural DNA Nanotechnology to NMR Membrane Protein Structure"

We'll hear two stories. First, nanotubes built out of DNA to solve the atomic structure of membrane proteins. Second, using DNA nanotubes to assemble wire-frame cages quite a bit larger than cages built in the past.

Holy grail for the DNA nanotechnology field (founder Ned Seeman's original goal) is to make a hollow crystal of DNA, then bind the target protein into the hollows. But this requires very precise spatial ordering.

(You want proteins arranged in a crystal, because then when you shoot X-rays through them, you can tell the structure. Shih just said that you can use NMR to determine structure, and it doesn't need such precise placement.)

There are (at least) three classes of proteins they're interested in: adrenaline receptors, ion channels, and (I didn't catch it). Membrane proteins are very important as drug targets, but very hard to analyze. It's hard to purify them in the first place, and then to get them to line up is another level of difficulty.

Membrane proteins have lots of methyl groups, which confuse the NMR signal. But you can also determine angular information--if you can make the proteins line up. You want 0.1% of the proteins to be aligned. So you mix them with a dilute liquid crystal. But membrane proteins are stabilized with detergent, which is incompatible with known liquid crystals. So... build a liquid crystal-type thing out of DNA! (A long thin filament.)

Just designing DNA strands that assemble into filaments isn't enough, because you'll get a distribution of lengths. So... use Rothemund's DNA staple technology. There's enough DNA in the standard strand to build a 400-nm length of six helices. They wanted longer, so they built two of these things, designed to stick together in pairs.

And... they form a dilute liquid crystal. (It exhibits birefringence.) And when mixed with a known protein, they found the signal they expected. Good signal-to-noise ratio. Now they're looking at proteins with unknown structure. This extends the range of NMR from 15(?) to 40 kilodaltons.

Now, the second story: building arbitrary DNA structures of large size. The field has been stuck at 25-nm geometric figures since about 2004. Icosahedron - 30 struts: 100 nm wide. Build it out of three double-triangles. And... it works! (Though there's some squishiness.) It looks symmetric, but each strut has a different sequence. There are about 500 different staples: that's 500 different places to hang some protein.

My conclusion: Nice that they can build big engineered structures. May be useful for research and maybe even for construction of moveable-part nanomachines.


Chris Phoenix


Building Protein-Based Nanomaterials

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Next talk: Matthew B. Francis, Department of Chemistry, University of California, Berkeley, is talking about New Synthetic Strategies to Build Protein Based Nanomaterials.

When it's time to add functional molecules to proteins, only a few reactions are used these days. Sometimes the reaction is incompatible with the function you're trying to add. Their group is starting with viral capsids: 180 copies of a simple protein shape self-assemble into an icosahedral capsule (capsid) that's hollow. He wants to attach one type of thing to the outside of the capsule and another to the inside. So that means you have to attach two different things to each protein, in the right positions, so the right things end up on the inside and the outside.

He's got a slide up with lots of chemical reactions, showing how molecules can be joined together. "To remind you why you didn't go into organic chem."

Bacteriophage MS2: infects E. coli, harmless to humans. Easy to grow, high yield. Stable to 60 °C. Has 2-nm holes. Can be emptied of RNA by soaking at pH 11.8. Stable from pH 3 to 11. In other words, generally useful.

There's a unique amino acid on the inside of the capsule, and there's a reaction that will attach stuff to it.

They studied what happened to their capsule constructions in a rat in a PET scanner. Found that a small molecule was dumped into the bladder in minutes, but their molecular construction, being a lot bigger, wasn't cleared as quickly.

They've found a way to attach antibodies to other proteins. This is very useful for binding those proteins to whatever the antibody can bind to (almost anything). They built a molecule that'll generate a toxic form of oxygen when exposed to light. (Implication: if the antibody was for cancer cells, you could kill the cells in a very targeted way.)

Tobacco mosaic virus is stable at high temperatures, and can be harvested from tobacco plants in very large quantities. TMV can be broken up into rings. Photosynthetic bacteria use ring structures to position chromophores (light absorbers) for maximum efficiency. They've attached chromophores to TMV rings, and found that light can be transferred from 38 "donor" chromophores to each "acceptor" chromophore. That implies that the construction is defect tolerant. They added other chromophore colors, and got 90% efficiency. Finally, they attached ketones to the outside of the TMV (the chromophore wound up on the inside) and attached gold particles to the ketones. The ultimate goal is to convert the light into a chemical transformation: to turn light into electron transfer.

He closed with a mention of modifying "whole cells" which I assume means attaching stuff to cellular proteins. He showed a slide of cells stuck to a surface in a precise line forming cursive letters.

My summary: This is cool stuff. To the extent that you can build nanoscale stuff out of the virus, that's great. In addition, their approach seems applicable to broader protein engineering. More ways to attach stuff to protein molecules are always useful.


Chris Phoenix


Foresight Prize Winners

The Foresight Nanotech Institute awards prizes each year for people who've made noteworthy contributions to molecular manufacturing: a student prize, a communication prize, and two Feynman Prizes, one for theory and one for experiment (named after the physicist, who talked about atomically precise manufacturing in 1959).

The student prize went to Fung Suong Ou, for "Devices and Machines on a Single Nanowire." He used a combinatorial approach to fabricate one-dimensional structures composed of carbon nanotubes and metal nanowires.

The communication prize was earned by Robert Freitas for his decade-plus of work telling people about the benefits of medical applications of molecular manufacturing. His highly detailed and informative Nanomedicine books are available in full online, as well as Kinematic Self-Replicating Machines.

The Feynman theory prize was won by David A. Leigh, for artificial molecular motor and machine design in the realm of Brownian motion.

The Feynman experimental prize went to Sir J. Fraser Stoddart, for synthesizing molecular machines including a molecular "muscle."

Congratulations to all winners!


Designing and Building Proteins

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Next talk: Alexsandr Miklos is speaking on Protein Design and Fabrication Automation.

What they do:
Immobilization and functionalization (turn proteins into sensors)
Capacity to engineer novel function (modify existing proteins)
Rapid protein fabrication (build and test)

Why:
Generate larger datasets (more research knowledge)
Produce technologically useful proteins

He starts from pre-folded (known) structures, then modifies them to get new functions. Proteins can bind almost anything; can be bound to fiber-optic cables to make sensors; can signal by fluorescence or electrochemistry; already, proteins on the end of an optical fiber are in clinical trials as a glucose sensor.

Proteins from high-temperature organisms can be stable for months at room temperature.

You can apparently take an existing protein, then modify the "pocket" that binds to molecules, so that it binds to the molecule you're interested in. Alexsandr talks very fast, so it's difficult to follow much less blog. But it seems he has a method for simulating lots of different pocket configurations, winnowing down the possibilities, and evaluating the remainder.

Then it's time to build and test the proteins in the lab. Rather than building DNA strands from scratch, there's a way to use PCR to stitch together smaller snippets. A robot can build a 96-well plate, doing 1440 fluid-handling steps, in 2.5 hours.

There are clever ways to purify proteins that I didn't catch; something about a chemical that binds to a certain sequence of amino acids, Cys-Cys-X-X-Cys-Cys. But once you've built them (which takes only four days) you can almost immediately evaluate them.

This is useful stuff -- a likely enabling technology for bio-based pathways to molecular manufacturing.


Chris Phoenix


Drexler On the Roadmap

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Sixth talk: Eric Drexler. Drexler is the one who started the idea of molecular manufacturing back in the mid-1980's.

The general focus of the Roadmap is on atomically precise technologies, not productive nanosystems. That's because the former is a necessary foundation for the latter. To engage researchers and encourage development, the roadmap focuses on the former. It provides merit criteria and metrics for research today. When selecting between proposals, look for atomic precision. Look for size, range of materials, other criteria that we'll probably hear about later in the talk.

The Roadmap looks toward advanced manufacturing (what physics says should be possible), but focuses on accessible productive nanosystems (such as ribosome-like systems).

Quantity of material is important: with tiny manufacturing capacity, you can make a few sensors. With large-scale manufacturing, you can address things like global warming. It's important to look at scenarios where the roadmap succeeds in developing such objectives. But for now, focus on near-term things.

Near-term, there are several kinds of atomically precise things we can build. One is biopolymers: protein, DNA. Very large design space available here. But proteins are hard to design. Proteins are not squishy and soft, like meat - that's mostly water. Think of cow horn, silk... protein could have the properties of epoxy. Proteins are useful for catalysts, precise alignment...

DNA doesn't have as large a range of functions as proteins. You can make mechanical structures with it: 3D structures, 2D structures with complex edges. Nanorex is working on structural DNA design using Paul Rothemund's "staple" approach. So you can design a million-atom, 100-nm diameter, atomically precise 3D structure. If you had a DNA synthesizer in-house, you could design a structure and build it in one day... 50 billion copies. This appears to be useful for building circuit boards. Zinc finger proteins can bind to specific DNA sequences, which implies you can attach things to these DNA structures.

Another class of precise things is specialized structures, where each one has to be synthesized separately. These are non-modular and tend not to have a lot of design freedom. But the range of function is almost unlimited: catalytic, electronic, mechanical, optical...

So the goal of all this capability (bought with multiple $billions) should be to integrate these components to build systems with hundreds to thousands of distinct 3D components, using atomically precise scaffolding and binding elements. Biology has this kind of integration: protein with nucleic acid with other stuff.

New topic: Advances in production technology. Type 1 advances build better products. In Type 2, the products include improvements to the production system, which can enable further improvements. So we really want better productive machines that can build better productive machines... This appears to be an argument for using nanosystems as the means of production of nanosystems.

Today, tools build tools that build tools... traceable back to blacksmithing. The tool that extruded your breakfast bagel is a leaf on this tree. The advanced APM tree has a "Mark II Ribosome" low on the trunk, and "Macroscale APM" high on the trunk, with "Dollar-per-kilogram fab" among the leaves. People tend to assume that things high in the tree are proposals for next year, "which would be absurd."

The Roadmap talks about cross-linked organic structures. An idea that arose pretty late is mixed covalent-ionic bonding: titanium dioxide, quartz. This may be closer at hand than the materials that have been examined more closely.

The role of roadmapping: Developing the knowledge and confidence necessary for coordinated system development. So the Productive Nanosystems roadmap should show what's necessary, when, how to coordinate and schedule developments. Avoid chicken-and-egg problems that lead to slow incremental progress.

DNA currently costs dollars per milligram. There's no point in thinking about kilogram-scale structures... but there's a researcher who has an idea for making DNA at dollars per kilogram... but why should he do it when there's no market for kilograms of DNA? This is a real example: it seems that DNA might actually get vastly cheaper.
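For scale, a quick bit of arithmetic (my own, just restating the gap mentioned above in consistent units):

```python
# Quick arithmetic: "dollars per milligram" is about a million dollars per
# kilogram, so "dollars per kilogram" DNA would be roughly a 10^6-fold cost
# reduction.  The $1/mg figure is an order-of-magnitude placeholder.
cost_per_mg = 1.0                       # $/mg, order of magnitude
cost_per_kg_today = cost_per_mg * 1e6   # 1 kg = 1,000,000 mg
print(f"~${cost_per_kg_today:,.0f} per kg today vs ~$1 per kg proposed")
```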


Chris Phoenix


Computing for Productive Nanosystems

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Fifth talk: James Davenport, Director, Computational Science Center, Brookhaven National Laboratory.

We need computational tools that take account of atomistic detail. Atomic resolution is critical. We're not dealing with big enough systems! But massively parallel petaflop systems are becoming available.

A single 5-nm dot of material has about 5,000 atoms. A 7-nm cube has about 40,000 atoms. You can make cobalt single-atom lines on platinum with a shallowly stepped surface. Quantum dots have interesting optical properties, and changing the size in a small way changes the optical properties. Biological effects can depend on size. Proteins can have thousands of atoms: they're nanoscale systems. Note that to study a protein in simulation, you need to add water: tens of thousands of atoms.
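A rough estimate of my own (not from the talk) of where numbers like these come from; the atom density is material dependent, so the figure below is only an order-of-magnitude placeholder:

```python
# Rough estimate of why a few-nm feature already contains thousands of atoms.
# Atom density varies by material; dense solids are typically ~50-90
# atoms/nm^3 (silicon ~50, gold ~59).  70 is used here as a round number.
import math

def atoms_in_sphere(diameter_nm, atoms_per_nm3):
    r = diameter_nm / 2.0
    return (4.0 / 3.0) * math.pi * r**3 * atoms_per_nm3

def atoms_in_cube(edge_nm, atoms_per_nm3):
    return edge_nm**3 * atoms_per_nm3

print(f"5 nm dot  : ~{atoms_in_sphere(5, 70):,.0f} atoms")
print(f"7 nm cube : ~{atoms_in_cube(7, 70):,.0f} atoms")
# With ~70 atoms/nm^3 this gives roughly 4,600 and 24,000 atoms - the same
# order of magnitude as the figures quoted in the talk.
```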

A roadmap for simulation has to deal with mixed length and time scales. You will be dealing with petaflop systems, which the simulation community doesn't yet know how to deal with. Software needs to be interoperable. Data sharing needs standards. (This would also help with integrating experimental results.) Data storage and retrieval is a problem: large amounts of data are generated by both experiment and simulation.

Hierarchy of tools, smallest to largest scale:

  • Quantum mechanical (ab-initio, DFT)
  • Car-Parrinello or first principles molecular dynamics (MD)
  • Force field MD (AMBER, CHARMM) [This is what's used in a lot of nanomachine simulations]
  • Heisenberg magnets
  • Nanoparticle-nanoparticle
  • Continuum (elasticity, micromagnetics)

Exact quantum simulations using the Schrödinger equation are basically impossible for multi-atom systems. But there are approximations (e.g. Hartree-Fock) that are not very expensive, yet pretty accurate.

There's a lot of discussion about properties emerging from lower-level simulations, "You've probably all heard of the program Gaussian," the ability to study magnetism but not the temperature dependence of magnetism... I'm not going to try to report on this.

Bulk gold is inert, but nanoscale gold is a useful catalyst. Relativistic effects are important since gold is heavy.

It's relatively rare that the quantum nature of atoms comes in. For example, for proteins, you use classical approximations. And in fact, you extrapolate parameters for the various different atoms.

For molecular dynamics, you want a time step of 10^-15 seconds. Protein folding takes milliseconds. The fastest folding protein that we know of takes a microsecond. That's a billion time steps! Until recently, we could do about 1/1000 of that. With Blue Gene/L, a few weeks of computer time might get you a microsecond: 10,000 processors, 10 particles per processor.
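The arithmetic behind that "billion time steps" figure, as a tiny sketch of my own (the throughput number at the end is an assumption purely for illustration):

```python
# With a ~1 femtosecond time step, a microsecond of simulated time needs
# 10^9 molecular-dynamics steps.
timestep_s   = 1e-15   # ~1 fs per MD step
sim_time_s   = 1e-6    # fastest-folding known protein, ~1 microsecond
steps_needed = sim_time_s / timestep_s
print(f"steps needed: {steps_needed:.0e}")          # 1e+09

# If a machine advances the system by, say, 50 ns of simulated time per day
# (an assumed throughput, just for illustration), a microsecond takes:
ns_per_day = 50
print(f"wall-clock: ~{(sim_time_s * 1e9) / ns_per_day:.0f} days")
```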

A recent simulation, of carbon nanotubes growing on iron, was done with forces computed on the fly from Hellmann-Feynman. (So the forces didn't have to be estimated.)

In high-end computing, the future is parallel. Clock speeds aren't getting faster. So we'll be getting multi-core. Blue Gene/P in 2008 will have 560 teraflop/s. Cray XT4 will be petaflop. 10 petaflop @ NCSA/Illinois in 2010. In 10 years, we may have exaflop machines! Petaflop and exaflop machines will have tens or hundreds of thousands of processors. They'll run slow for better reliability (less heat).

Tying it back to productive nanosystems: You will need computers; petaflop machines are around the corner; plan to use them on larger atomic systems; combine them with data repositories and experimental systems; think about multidisciplinary education (which may be corrosive to e.g. the idea of distinct physics departments).

Question: What's the current vogue for connections in multiprocessor machines? A: Current topology for interconnecting processors is a torus network. But with multi-cores (Blue Gene-P is a quad-core machine) communication will be fast among cores on a chip.

I didn't hear much relating to the Productive Nanosystems Roadmap in this talk.


Chris Phoenix


Productive Nanosystems: Bio-Nano Approaches

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Fourth talk: Keith Firman, University of Portsmouth, UK. Title: Biological Molecular Motors for Nanodevices.

The interesting thing about biology is that it crosses both the micro and the nanoscale. He'll talk about chemical motors, overview types of biological molecular motors, give examples of nanodevices incorporating molecular motors, talk about single-molecule measurements, toxicity testing biosensor, and a proposed biosensor/nanoactuator.

(As a side note: biological motors are immersed in water, which will limit their power density and efficiency from fluid drag. Not all molecular motors are immersed in water, but many of them are.)

Chemical motors (non-biological) can generate a force of 200 pN per molecule, from a machine 2-3 nm in size. That's pretty impressive.

Even simple organisms, such as bacteria-targeting viruses (bacteriophages), include molecular motors. These are used to augment self-assembly. For example, the bacteriophage motor can corkscrew DNA into the virus against 10,000 atmospheres of pressure, using ATP for fuel.

Most bacteria have self-assembled flagellar motors: about 40 proteins (multiple copies).

Kinesin: walks along microtubules in cells, again powered by ATP, taking 200-300 steps per minute. If kinesin is fastened to a glass slide, it can make microtubules move.

ATP synthase includes a proton pump, which is connected to a component that synthesizes ATP. In an experiment, the ATP-synthesizing part (which is reversible, as it must be for efficiency--CP) was attached to a glass surface, a fiber (made of actin and fluorescently tagged) was attached to the drive shaft, and adding ATP made the fiber rotate. In another experiment, the proton-pump part was attached to a light-driven proton generator, and an array of these was used to transport a fiber for over 70 microns. So this is quite cool.

It's difficult to attach motors to a surface and have them still work. Translocases bind to DNA at a specific site and then pull it to make it move. Not just one step - many base pairs are pulled through the translocase, making the strand shorter. AFMs can be used to watch the translocase create a loop of DNA. But this is a slow process. Using a magnetic bead, they've measured 564 base pairs per second being pulled.
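For a sense of scale, here's a one-line unit conversion of my own (using the standard ~0.34 nm rise per base pair of B-DNA, not a number from the talk):

```python
# 564 base pairs per second, at ~0.34 nm per base pair of B-DNA, corresponds
# to a linear pulling speed of roughly 190 nm/s.
bp_per_s  = 564
nm_per_bp = 0.34          # canonical B-DNA rise per base pair
print(f"~{bp_per_s * nm_per_bp:.0f} nm/s")
```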

They're hoping to commercialize this type of motor. If the DNA can pull the bead toward a Hall-effect sensor, then they can detect addition of "fuel" (ATP) that makes the motor move. This could be used as a nanoscale valve, or a toxicity tester: detecting dioxin, which stops the motor from working. One dioxin molecule per 400 bases of DNA will stop the motor. It may also sense DNA-binding drugs. Each molecule can generate an individual signal (each molecule has its own sensor). And different molecules can have different sequences.

My summary: This is very cool work, but not much related to productive nanosystems. The motor they're looking at doesn't seem especially useful for molecular machine applications (though it seems great for sensors). I asked this question, and he said this could be used as a conveyor belt: DNA is a great templating tool to attach objects to. But not in the next 10 years. I'm still not sure how controllable this would be; he mentioned that the motor randomly lets go of the DNA.

Question: If the DNA is functionalized, can it be pulled through the motor? A: If there's a gap in the DNA (a region of just one strand) then it'll pull through. If there's a junction or branch, it'll stop.


Chris Phoenix


Productive Nanosystems: Atomically Precise Manufacturing

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Third talk, John Randall, Zyvex: A completely different approach. Zyvex was founded to create atomically precise manufacturing on the way to productive nanosystems. In other words, building precise structures using big machines rather than nanoscale tools.

Assumptions:

  1. APM is valuable.
  2. Digital matter is "an advantage ripe to be exploited." (I've been saying this for a long time - it's a fundamental advantage of molecular manufacturing.)
  3. Self-assembly is powerful but limited.
  4. Brute-force top-down engineering is not always elegant but it works.

Goal: Produce 3D rigid covalent structures with top-down control direct from CAD (computerized blueprint). This is the result of improvements in ultra-precision manufacturing, but it'll take a change in mindset. (Current manufacturing still treats matter as jelly-like and infinitely divisible.)

They've found commercial applications for even very limited initial capabilities.

Putting atoms where you want them: Eigler's creation of the "IBM" logo made use of atoms dropping into minimum-energy positions. (This is a reference back to the digital theme.)

Wilson Ho did molecular pick and place, creating covalent bonds. (There have been a variety of scanning probe chemistry demonstrations.)

Mechanosynthesis has issues: You have to pick up the part, verify you have it, transfer it, verify you've done that. They've looked at tool tip reactions; they think that existing tools are adequate to deposit dimers on diamond surface at room temperature. Although this is theoretically exciting, there are practical problems, including how to synthesize the tool tip. So they took a different approach...

Atomic layer deposition builds amorphous materials; atomic layer epitaxy (ALE) builds crystalline materials. Start with a protected (passivated) surface: every available bond has a hydrogen atom. If you deprotect the surface, removing the hydrogen, then you can deposit a layer of atoms. If you choose the right precursor gas, you add only one monolayer which is protected as it's added. Then you can deprotect and add exactly one more layer of atoms. There are a number of precursor gases available. There are literally hundreds of systems to grow things with atomic precision in one dimension.

Now, if you combine this with the ability to deprotect the surface in selected locations... With a scanning tunneling microscope, you can remove single hydrogen atoms with atomic precision. Several groups have demonstrated this. This is "the limit of a thin resist" - a monolayer of hydrogen.

If you do this layer by layer, you can build 3D structures. Prof. Joe Lyding at the University of Illinois has done repeated desorption/deposition. He's probably created amorphous rather than crystalline material, but it does show patterning.
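To make the control flow concrete, here's a schematic sketch of my own of the patterned atomic-layer-epitaxy loop described above. All the function names are hypothetical placeholders for the real tool steps; the point is just the depassivate / grow / inspect cycle repeated layer by layer:

```python
# Schematic sketch of the patterned ALE loop: STM lithography on a hydrogen
# "resist", one self-limiting monolayer of growth, then inspection.
# Function names and the toy pattern are placeholders, not a real tool API.

def stm_depassivate(layer, sites):
    """Remove H atoms at the chosen lattice sites with the STM tip."""
    print(f"layer {layer}: depassivated {len(sites)} sites")

def expose_precursor(layer):
    """Precursor gas deposits exactly one self-limiting monolayer on the
    exposed (H-free) sites, re-passivating the surface as it goes."""
    print(f"layer {layer}: grew one monolayer on exposed sites")

def stm_inspect(layer):
    """Use the same tip to image the result and flag errors."""
    print(f"layer {layer}: inspected")

def build(pattern_by_layer):
    for layer, sites in enumerate(pattern_by_layer):
        stm_depassivate(layer, sites)   # lithography step: monolayer "resist"
        expose_precursor(layer)         # growth step: one atomic layer
        stm_inspect(layer)              # error checking before the next layer

# A toy 3-layer "staircase" pattern; each entry is the set of lattice sites
# to open up on that layer.
build([{(x, y) for x in range(4) for y in range(4)},
       {(x, y) for x in range(3) for y in range(4)},
       {(x, y) for x in range(2) for y in range(4)}])
```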

Differences from mechanosynthesis:

  1. Building blocks don't have to be captured by the tool tip.
  2. The tool tip can be used to inspect both deprotection and assembly.
  3. You can do large areas (fast) or atomic resolution, depending on mode.
  4. This is a very general technique.
  5. All you need is an atomic-resolution STM tip - don't need anything else with atomic resolution.

You can't make large, reentrant, or releasable structures. However, there are some useful products. They aren't interested in a laboratory demonstration; they want manufacturing.

You need an atomically precise, invariant tip. ALIS has built such a tip. A reproducible atomic structure at the end of a tungsten wire. There are several other possibilities. Note that the tip never has to touch the surface, so it should last quite a while without damage.

He wants a parallel array of SPMs for higher throughput. They think they can get sub-nanometer closed loop X-Y position control with integrated electronics, using CMOS MEMS processes.

They're trying to develop a dual-material process, silicon and germanium, so that you can make releasable structures. (They think they can deal with lattice mismatch.)

One possible product is a nano-imprint template. They expect atomically precise tools to be the most valuable product. They expect to enable productive nanosystem factories.

Question: Hydrogen migrates at normal temperatures. Is that compatible with the deposition technologies? A: We believe (after careful study) that the hydrogen is stable on a silicon surface, up to 200-300 degrees C. We think we can get epitaxy to work in that window. Cryogenic temperatures are not necessary. You do get motion on a single dimer, but no long-range motion.

Question (from Drexler): There's a big divide in molecular technologies between processes where parts go together due to fit or reactivity, and those where the resulting pattern is due to mechanical control. Conceptually, your approach comes under mechanosynthesis. About error rate: If you have a mis-removal, can you put a hydrogen back where it should be? And how can you correct errors in silicon deposition? A: ALE balances errors: it relies on the mobility of silicon on unpassivated surfaces. This may not work on small surfaces. We don't know what error rates will be in small areas. But at least we'll have a way to inspect. We don't have a generic way of removing silicon or putting down hydrogen. We may be able to deposit hydrogen in an area and then go back and clean it up.

Q: Have you looked at atomically precise *doped* structures? A: You'll hear Michelle Simmons talk about putting down phosphorus atoms exactly where she wants them. So yes, we can create structures with controlled doping. Again, the reaction is generic. We think there's a wide range of heterostructures you can make.


Chris Phoenix


Productive Nanosystems: Abiotic Biomimetic Roadmap

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


Second talk Tuesday: Chris Schafmeister. He got started in protein design, and designed a protein--which took four years. He would like to make things like proteins and enzymes, but rather than building flexible chains that have to fold, he wants building blocks that couple through (rigid) pairs of bonds. Since they don't have to fold, they will be easier to design. The building blocks can be "decorated" with functional groups to make enzyme-like things.

Productive nanosystem definition: "A closed loop of nanoscale components that make nanoscale components."

Schafmeister has built 14 building blocks - some of them can be made tens of grams at a time. They've built one with a functional group and they're working on other functional groups - some not found in natural amino acids.

They attach a building block to a plastic bead, then add other building blocks one at a time. This is not self-assembly: it is programmed assembly. They want to build molecules containing 20-50 blocks. That's a lot of reaction steps! Once they've built a chain, they double-link it, making it rigid. They've synthesized over 100 molecules; most are very water-soluble; the most building blocks so far is 18.

He's got an 8-page featured article in "Scientific American Reports: The Rise of Nanotech."

He wants to "create many artificial catalysts that approach the capabilities of enzymes." No one has made an enzyme yet - he wants to make thousands of them, engineered. He wants to make 60,000 enzymes as rapidly as he can write 60,000 lines of code. This may be achievable because enzymes carry out catalysis (accelerating chemical reactions) by changing the mechanism of the reaction. It does this via functional groups arrayed around the substrate. "If we can position multiple functional groups in three-dimensional space in all the right places," then we may be able to implement enzymes. So if functional groups (found in databases) were positioned in space correctly, you'd have the enzyme.

So, figure out where the functional groups should be, then use computer search to find the sequence of building blocks that holds the functional groups in the right position. He shows an example of his software working, searching for a sequence.
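Here's a toy sketch of my own of the kind of search this implies: pick a sequence of rigid building blocks whose geometry places a functional group near a target point. The block alphabet, the fake 2-D geometry, and the brute-force search are all placeholders purely to show the structure of the problem, not his software:

```python
# Toy sequence search: find the chain of building blocks whose endpoint lands
# closest to a target position.  Each "block" just adds a fixed displacement
# in 2-D - a deliberately fake geometry model to keep the sketch short.
import itertools

BLOCKS = {            # hypothetical block types -> displacement they add
    "A": (1.0, 0.0),
    "B": (0.7, 0.7),
    "C": (0.0, 1.0),
    "D": (0.7, -0.7),
}

def endpoint(seq):
    x = y = 0.0
    for b in seq:
        dx, dy = BLOCKS[b]
        x, y = x + dx, y + dy
    return x, y

def score(seq, target):
    x, y = endpoint(seq)
    tx, ty = target
    return ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5   # distance to target

def best_sequence(length, target):
    # Brute force is fine for a toy; the real problem (14^30 possible
    # sequences, as he notes later) needs much smarter search.
    return min(itertools.product(BLOCKS, repeat=length),
               key=lambda seq: score(seq, target))

target = (3.0, 2.5)
seq = best_sequence(4, target)
print("best sequence:", "".join(seq), "endpoint:", endpoint(seq))
```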

Proposes a "nanomachine synthesizer": 1) Chemical solution vat 2) Personal computer 3) Electrochemical interface. In biology, DNA is transcribed into messenger RNA, the sequence of bases which are read into the sequence of proteins. Trouble is, there's no place to plug in a computer. So replace the DNA with a computer...

He proposes a "synthesis train" - a sequence of carriers (built of his molecules) each of which carries one building block. So he'd build the synthesis train out of his molecular building blocks, and the train would then carry other building blocks to build other molecules. The carriers would be rigid, and when the chain was bent, it would bring the building blocks together and make them react. The building blocks would be put onto the train, and error-checked, by yet other catalytic molecules.

Electrode chips exist which act as redox controllers and sensors, driving chemistry with electricity. He wants to use similar electrodes to modify his synthesis trains. Each train has a header with a switchable state. So you start with a bunch of one-"car" trains (one "header" plus one car), then string the "cars" together onto a single header.

He wants to have a system that can take in very small feedstock molecules, build building blocks, then put them into chains under full computer control: massively parallel.

Once he's built 50-block chains, to put them together into larger structures, he wants to do it with covalent bonding rather than self-assembly. Stronger and potentially more reliable than self-assembly.

To design something like a biomedical robot: First, design each component and how they will fit. This is a huge job. Break it down into components and sub-components. Design the smallest sub-components with complementary surfaces... then design the catalysts that will combine the units... so you're building both structural chains and catalysts to join them. You probably couldn't build a car with this, but you could build things large enough to see and handle. Again, this is not self-assembly. He has some ideas for how to build things that are too big for chemical reactions. He skipped past some very interesting slides showing probe tips used to place molecules.

His summary: This builds on biology and organic synthesis experience. There are opportunities for error correction as needed. It's highly parallel and highly redundant. There's no runaway self-replication. You can improve it incrementally.

Question: How long do the chemical operations take? A: Seconds, maybe minutes. Not hours. Right now, we do one per hour (10^17 molecular copies).

Question: In enzymes, moving the active site even a few angstroms can break the enzyme. So you may have trouble positioning your active components precisely enough. A: I get this question a lot. The search space is enormous: 14^30 three-dimensional structures. The molecules are not completely rigid. The goal is to be off by less than a couple of angstroms. Also, the functional side chains will have free rotation; we'll either have to block that, or see if we can use it. In natural proteins, when you add a substrate, the protein folds up around it.

Q: Have you looked at branched structures? A: There aren't enough protecting groups in chemistry. (So you couldn't build out each chain separately.)

So... this sounds like a very aggressive and interesting way to build large molecule systems which can be designed to be functional.


Chris Phoenix


Productive Nanosystems conference kickoff

Today and tomorrow, we're reporting on presentations at an important conference on Productive Nanosystems: Launching the Technology Roadmap. Chris Phoenix is providing live blog coverage for us...


I'm here at the Productive Nanosystems conference, to hear where some very smart and high-powered thinkers expect that atomically precise nanotechnology and nano-building-nano will go over the next few decades. The big question I have is: How much will the roadmap focus on nanoscale technologies that fall short of molecular manufacturing, and how much will it provide concrete endorsement and information about molecular manufacturing?

The first speaker is Alex Kawczak, VP, Nanotechnology & BioProducts, Battelle. Battelle is the manager or co-manager of seven national labs, and brings a lot of technical weight and gravitas to the Roadmap collaboration. Alex, starting off the conference, will show what the Roadmap is really about: more nanoscale tech, or something really innovative in the way of nano-building-nano.

He starts by talking about nano being a revolution... the roadmap is "a recommitment to atomic precision" as the guiding vision of nanotech. Guiding vision is to engage nanotech to improve the human condition. He mentions technical people who have contributed to the roadmap, Eric Drexler of course, Jeff Soreff at IBM, Damian Allis at Syracuse University, and also Stephanie Corchnoy at Synchrona.

Next a review of Battelle's history that I won't try to summarize.

A review of the goals of US, Korean, and Taiwanese nanotech initiatives. They all want to improve nanoscale tech with a focus on commercialization. US NNI has invested $6.5 billion over the past 5 years - most in basic research. "An opportunity exists for the U.S. to be a leader in the research and applied development of atomically precise technologies and atomically precise manufacturing (APM)." In other words, this is how the US can distinguish ourselves from the global crowd.

He cites Feynman: atomic precision, "maneuvering things atom by atom." There are several Atomically Precise things in the Roadmap: Manufacturing, Atomically Precise Productive Nanosystems (APPN), Atomically Precise Technologies. Now he's talking about the nanotech market as a whole ($1 trillion by 2015), most of which is not atomically precise. He says atomic precision can improve nanotech.

Atomically Precise Structures are definite arrangements of atoms: self-assembled DNA, engineered proteins, nanotube segments, etc. Atomically precise technology will increase the scale and complexity of such structures.

Atomically Precise Manufacturing (APM) lets you build atomically precise structures under programmable control.

Atomically Precise Productive Nanosystems are functional nanosystems that implement APM. This is nano-building-nano - the high-impact stuff.

So this sounds like the roadmap defines a spectrum of AP technologies, working from self-assembly of engineered AP structures, up to nano building nano.

Two strategies in the roadmap: 1) Develop AP technologies for energy; 2) Develop AP technologies for medicine. Hm, no emphasis on productive nanosystems in that slide.

They're hoping that the Roadmap will help a broad range of industries to develop nano capabilities. They want to develop a broad technology base for APT, apply this to develop APM, APPNs, and spinoff APT applications. They want to "treat atomic precision as an essential criterion for research." So the roadmap encompasses self-assembly as well as APPN.

A few very dense slides of future timelines. 10-25 years out, they want solid-building APPNs (not just polymer) with small-molecule inputs. 15-30 years out, scalable APPN-array systems. Product: "Systems at the level of complexity of 2007 macroscale products." That's a pretty significant goal!

He re-states that the US is well positioned to lead this technology, and that "APM products will have Broad and Growing Applications that will lead to Productive Nanosystems of the future."

Question from audience: Does roadmap explicitly lead to macro-scale? Answer from Drexler: Roadmap takes today's technology forward, so it's a long road, but it does say a bit about that long-range objective.

Question: (Inaudible, something about funding.) Answer (Alex): NNI has done a very good job of establishing nanotech centers within national labs. We believe that, for example, an energy initiative with a DOE program manager focused on APM, working with the DOE national labs, could create the foundation for APM within established national labs; we said that's necessary. There's been a lot of solid research done and a tremendous organization of capabilities, the best in the world, so we're well-positioned.

Question: Why focus on energy and healthcare? Answer (Alex): For energy, the [research] infrastructure is there, it's a matter of national security, and we expect that APM will help energy goals arrive much faster. Also in health, we think there's groundwork that could benefit from APM. We were pragmatic: we looked for where that $6.5B could be leveraged for the greatest societal benefit; also, these two areas are already receiving funding.

Question: Different mfg techniques for different applications? Answer (various people): Energy will mostly (except for catalysis) need high-volume manufacturing. The roadmap recommends hybrid manufacturing technology approaches at several points.

So it sounds like the Roadmap does talk, at least some, about molecular manufacturing, which they call APPN. This could be a very interesting conference. And it looks like the Roadmap does explicitly endorse molecular manufacturing.

Post-talk comment from Jim Von Ehr (today's moderator): Comparison to semiconductor roadmap: That was developed after they'd been going for a while. Our roadmap is developed in advance, so it's a bit speculative; you'll be amazed at how many different things were pulled together.


Chris Phoenix

CRN Home Page

Roadmap Unveiling

Tomorrow is the start of a two-day SME conference in Arlington, VA, on Productive Nanosystems: Launching the Technology Roadmap...

For 20 years, researchers have explored the amazing promise of atomically-precise manufacturing. Now, for the first time, the Technology Roadmap for Productive Nanosystems will show the way forward, and the payoffs along the road, to this ultimate technological revolution.

Over the last two years, under Battelle's leadership, and hosted by four U.S. National Laboratories, researchers from academia, government, and industry have met to chart paths toward advanced, atomically-precise manufacturing. The resulting roadmap reveals crucial challenges and unexpected opportunities in the next steps forward.

Chris Phoenix will attend the conference and "live blog" his observations for us. It's all here starting Tuesday morning!

CRN Home Page

The Limits of Vision

Not Necessarily Relevant Quote of the Week:

Everyone takes the limits of his own vision for the limits of the world.
— Arthur Schopenhauer

CRN Home Page


Buzzwords for the Future

Here are a few...can you think of more?


Openness - Can mean several things, including net neutrality, source code access, transparency, and general willingness to consider new ideas. Attempts by corporate marketers and government spinmeisters to co-opt the meme likely will be common. Right now, there is a huge issue brewing in the reassignment of a big part of the radio spectrum. CRN considers these to be crucial issues that could either complicate or simplify the creation of the open molecular manufacturing infrastructure we've advocated.


Trust - When you get an email, how do you know you can trust that it's safe to open? How do you know that it's from who it says it's from? How do you know that an offered product is legitimate and not a knockoff, or that an online "service" company is not fraudulent? Phishing is a huge problem, and it's almost certain to keep getting worse. So, how far away are we from the advent of 3D spam? Will we need a system of Molecular Rights Management? Look for startups in the future selling "trust" not as a slogan, but as a commodity, something like whuffie.
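One building block for machine-checkable trust already exists in today's software: cryptographic message authentication. As a minimal sketch only (the key, messages, and function names here are purely illustrative, not anything CRN has proposed), two parties sharing a secret key can confirm that a message came from the other party and wasn't altered in transit:

```python
import hmac
import hashlib

# Illustrative shared secret; in practice this would come from a key
# exchange or a public-key infrastructure rather than being hard-coded.
SHARED_KEY = b"example-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute an authentication tag the receiver can check."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Return True only if the tag matches, i.e. the message is unaltered
    and was produced by someone holding the shared key."""
    return hmac.compare_digest(sign(message), tag)

msg = b"This design file is genuine."
tag = sign(msg)
print(verify(msg, tag))                 # True
print(verify(b"tampered file", tag))    # False
```

Whether a future Molecular Rights Management system would be built on primitives like these is an open question; the point is that "trust" can be turned into something a machine can check, which is what would let it be bought and sold.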


Sousveillance - A word coined by Steve Mann to describe the process of bottom-up ubiquitous observation and recording as opposed to top-down surveillance; similar to Jamais Cascio's participatory panopticon, except a little easier to say. Imagine what might be done with cheap miniature networked devices -- several generations beyond today's camera phones -- in the hands of millions of people. No matter what it eventually is called, this will be among the most explosive issues of the next decade.


Mike Treder

CRN Home Page

Stopping Climate Change (or not)

Recently we posted a somewhat controversial article about four stages of Climate Change Denialism. Our fourth level of denial was characterized as "Global warming is happening, and it is a result of human actions, and it will be catastrophic, but that's okay."

Jamais Cascio then offered this as an alternate:

Global warming is happening, and it is a result of human actions, and it will be catastrophic, but it's too late to do anything about it other than adapt.

In retrospect, I probably should have chosen a word other than 'denial', because the ultimate point is not about acceptance or rejection of climate change as a real phenomenon, but about our response to it. As Jamais suggests, it is possible to accept everything about global warming and still reject proposed solutions as unworkable, ineffective, or both.

That's the point of view expressed in a new article from Foreign Policy magazine on "Why Climate Change Can't Be Stopped."

Environmental advocates have finally managed to put the issue of global warming at the top of the world’s agenda. But the scientific, economic, and political realities may mean that their efforts are too little, too late.

As the world’s leaders gather in New York this week to discuss climate change, you’re going to hear a lot of well-intentioned talk about how to stop global warming. From the United Nations, Bill Clinton, and even the Bush administration, you’ll hear about how certain mechanisms—cap-and-trade systems for greenhouse gas emissions, carbon taxes, and research and development plans for new energy technologies—can fit into some sort of global emissions reduction agreement to stop climate change. Many of these ideas will be innovative and necessary; some of them will be poorly thought out. But one thing binds them together: They all come much too late.

For understandable reasons, environmental advocates don’t like to concede this point. Eager to force deep cuts in greenhouse gas emissions, many of them hype the consequences of climate change—in some cases, well beyond what is supported by the facts—to build political support. Their expensive policy preferences are attractive if they are able to convince voters that if they make economic sacrifices for the environment, they have a reasonable chance of halting, or at least considerably slowing, climate change. But this case is becoming harder, if not impossible, to make.

To accept the argument that climate change cannot be stopped, you have to agree with three premises:

  1. A political solution isn't going to happen. Too many entrenched interests will hinder adoption of any meaningful steps.
  2. Even if the most far-reaching political solutions could be implemented, it still would not be enough to make a substantial difference.
  3. Not even the most radical proposals for a technological fix will be sufficient. It's simply too late, because the complex systems we've unleashed will prove to be overwhelming and intractable. Moreover, as Jamais Cascio has said, "we know nowhere near enough to make terraforming a plausible or safe option."

In the same article that I just referred to above, Jamais says, "Our best pathway to avoiding climate disaster remains the rapid reduction and elimination of anthropogenic greenhouse gases."

Note that he suggests two different actions: reduction and elimination of anthropogenic greenhouse gases.

The first step, substantially reducing greenhouse gas emissions, is mainly what the most aggressive political solutions from Premise 1 would accomplish. But if only the first step is followed, then we run up against Premise 2, namely that it's too little, too late.

So we'd have to also work toward eliminating greenhouse gases already in the air. That likely would require one or more of the radical technological solutions currently being advanced.

However, here's the rub: if you accept Premise 3, that won't work either. It's quite possible, some might say probable, that even a powerful new technology such as molecular manufacturing could not be employed effectively to blunt the catastrophic impacts of melting ice caps, rising sea levels, and rapidly shifting weather patterns.

Are you with me so far? If so, it looks like we're screwed. Am I ready, then, to jump on the climate change can't be stopped bandwagon? I'm not, for several important reasons:

  • First, we don't really know what effect political solutions that focus on reduction of carbon emissions might have. Although it seems unlikely, it is possible that a rapid and comprehensive switch away from fossil fuels could make a big difference.
  • Second, as Brian Wang points out, even if such changes don't have an appreciable effect on climate change, they would have other big benefits, including reduction of deaths from pollution, from mining, and from war.
  • Third, we also don't know whether or not ideas like mirrors in space to deflect sunlight can be of help. Further research into any such proposals that seem feasible should be encouraged.
  • Fourth, we should not adopt any position that would further the aims of the industrial, commercial, and political forces (see monstrous hybrid) aligned against taking action. That can only make the problem worse.
  • Fifth, not often will an issue arise that galvanizes a global population to call for action with only long-term benefits. This is a time to push for huge changes that will have lasting effects. It's not a time to encourage complacency or resignation.
  • Sixth, since it appears that molecular manufacturing may be the only technology powerful enough to have a significant impact in ameliorating the effects of climate change, and since there are so many other great benefits to be gained through the development of molecular manufacturing, we should strongly support research in that direction.

Mike Treder

CRN Home Page

SUPPORT RESPONSIBLE NANOTECH


  • Even a small contribution will make a big difference!

  • Donate

  • CRN is affiliated with World Care®, an international, nonprofit 501(c)(3) organization.
