ramblings of an aimless mind

Just another WordPress.com weblog

Posts Tagged ‘Science’

Thinking big…while thinking small.


Ever since people back home got wind of the fact that I was doing a PhD in “Nanotechnology”, I am usually asked by eager friends and acquaintances about the latest and greatest in the field. I always start by saying that “nanotechnology” is in fact too generic a term for anyone to be an expert in, and that I know only a tiny part of it, before launching into my views on all and sundry. After many a discussion, and a talk to the local Rotary club, I thought it would be good to post what I said as a starting point for a series on nanotechnology. I often find that writing helps to crystallise my thoughts and understand things better, so this endeavour should be educational.

What is it? and why is it different?

A good starting point is to ask what nanotechnology is and what it encompasses. There does not seem to be a definition set in stone, but a commonly accepted (and here heavily paraphrased) one is “any technology that derives its defining features from some aspect of it being 100 nm or less in dimension”. So a bag of cement which derives some of its qualities (more strength, less weight etc.) from the presence of 50 nm particles inside it is nanotechnology, while the iPod Nano, however much Apple would like to claim so, is not.

Note: If you are unfamiliar with the nomenclature, a nanometre is a billionth of a metre. A human hair is roughly 50,000 nanometres thick. That should give a good idea of how small these things are.

So what is different about nanotechnology? In the broad context, I would say nothing much. The same laws of physics apply as they do to micro, macro and mega-technology (if such words exist). What is different is the relative importance of each law when it acts on materials of different sizes.

A good way to think about it is to conduct a thought experiment in which every object is acted upon by gravity and by some fictitious force pushing it away from the earth. Assume also that the fictitious force has a constant magnitude, equal to the force gravity exerts on a 1 kg mass. How does this situation pan out for objects of different masses? For something human-sized (roughly 70-80 kg), gravity is about 70-80 times stronger, so dominant that the fictitious force will simply not be noticed. As we go to smaller objects, the relative strength of gravity decreases and that of the fictitious force increases. The tipping point is when the mass of the object reaches 1 kg: the two forces exactly balance and the object will float in space. For objects lighter than 1 kg, the fictitious force dominates and the object will shoot away from the surface of the earth, apparently of its own volition. The ability to float in empty space and shoot off against the pull of gravity may seem magical, but it is simply a change in the balance of forces that causes this apparent magic.
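To put rough numbers on the thought experiment, here is a minimal sketch. The masses are arbitrary illustrations; the only real constant in it is standard gravity, about 9.81 m/s².

```python
# Thought experiment: every object feels gravity pulling it down and a
# fictitious force pushing it up, fixed at the weight of a 1 kg mass.
G = 9.81                    # standard gravity, m/s^2
FICTITIOUS_FORCE = 1.0 * G  # N, constant upward push

for mass_kg in [75, 10, 1, 0.1, 0.001]:
    weight = mass_kg * G             # downward pull of gravity, N
    net = FICTITIOUS_FORCE - weight  # positive means the push wins
    if net > 0:
        verdict = "shoots away from the earth"
    elif net < 0:
        verdict = "stays firmly on the ground"
    else:
        verdict = "floats: the forces balance exactly"
    print(f"{mass_kg:>7} kg -> net force {net:+8.2f} N, object {verdict}")
```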

The balance of forces is a very simplistic example, and nanotechnological phenomena involve more complicated things, but I find they almost invariably arise from this shift in the relative importance of different laws, brought about by small size, small mass or a specific structure. In my opinion, there are three broad categories through which nanotechnological materials and devices get their USP (unique selling point).

New materials.

One aspect of nanotechnology is the discovery of new materials that naturally exist at sizes small enough to merit the nanotech label. The classic examples are a few carbon-based materials such as carbon nanotubes (CNTs), graphene and fullerenes. They arise out of the propensity of carbon to form an immensely impressive array of structures (from coal to diamond, for instance). CNTs and graphene are being researched for applications as diverse as improving touchscreens, toughening materials, building electronics, improving battery energy storage and building space elevators. There are still issues with tractability though, and controlling these materials is not easily done.

New uses for known materials.

This is the domain of tweaking known materials so that we alter the balance of forces and tip it into doing what we would like. A good example is quantum dots, which are usually made by taking powders of known materials such as lead sulphide (PbS) or cadmium selenide (CdSe) and blitzing them until they are turned into nanometre-sized particles. This ultra-tiny size radically changes the properties of the particles vis-à-vis the original (mega-sized) powder and makes them more suitable for certain applications. This is a case of an existing material simply being cast into a different form, so that it takes on different properties and potential applications.
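As a rough illustration of why a few nanometres matter so much, the textbook particle-in-a-box formula E = h²/(8mL²) says the confinement energy of an electron grows rapidly as the box shrinks. Real quantum dots need effective masses and more careful models, so treat this only as a sketch of the scaling, not a description of PbS or CdSe specifically.

```python
H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # free-electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def confinement_energy_ev(size_nm):
    """Ground-state energy of an electron confined in a 1D box of width size_nm:
    E = h^2 / (8 m L^2). A crude stand-in for quantum confinement in a dot."""
    L = size_nm * 1e-9
    return H**2 / (8 * M_E * L**2) / EV

for size_nm in [1000, 100, 10, 5, 2]:
    print(f"{size_nm:>5} nm particle -> confinement energy ~ {confinement_energy_ev(size_nm):.2e} eV")
```

The energy barely registers for a micrometre-sized grain but becomes a sizeable fraction of an electronvolt for a 2 nm particle, which is why the optical and electronic behaviour changes so dramatically.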

Existing materials in innovative structures.

Tweaking the balance of forces can also be accomplished by using known materials in innovative structures rather than simply blitzing them, as in the case of quantum dots. A classic example is the wing of the Morpho rhetenor butterfly, a species originating from South America. The wings of this butterfly are shimmering and brightly coloured, yet surprisingly not due to the presence of any dye or pigmentation. The colour comes instead from an elaborate structure within the wing, involving alternating layers of two different materials, each a few nanometres thick. The thickness of these layers is of a similar magnitude to the wavelengths of the incident light, and the resulting interaction produces the bright colours that make the butterfly so striking. Exploration of this structure is opening up a whole new area of research focussing on what is known as the “photonic bandgap”, which could have very interesting applications in the future.
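To get a feel for the length scales involved, the simplest related formula is the quarter-wave condition for a stack of thin layers: a layer of refractive index n and thickness d reflects most strongly near a wavelength of about 4nd. The numbers below are illustrative guesses (a chitin-like index of ~1.56, layer thicknesses of a few tens of nanometres), not measurements of the actual Morpho wing, whose structure is far more intricate than a flat stack.

```python
def peak_reflection_nm(refractive_index, thickness_nm):
    """Quarter-wave estimate: strongest reflection near lambda = 4 * n * d."""
    return 4 * refractive_index * thickness_nm

N_CHITIN = 1.56  # illustrative refractive index for a chitin-like layer

for thickness_nm in [60, 75, 90]:
    wavelength = peak_reflection_nm(N_CHITIN, thickness_nm)
    print(f"{thickness_nm} nm layers -> strongest reflection near {wavelength:.0f} nm "
          "(visible light spans roughly 380-750 nm)")
```

Layers only tens of nanometres thick land the reflection peak squarely in the visible range, which is how structure alone can paint the wing blue.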

So there we have it, my three-tier classification of nanotechnology. This is just an introduction, written to be long enough to provide information while short enough to maintain interest. I will go into more detail in further posts as I learn new stuff, which may be tomorrow, the day after, or never, depending on what I am up to otherwise. 🙂


Written by clueso

February 28, 2012 at 7:34 am

Posted in Uncategorized


Education unchained?


Some time ago, I blogged about my idea of an education system that separated exams from learning and thereby allowed students more liberty in choosing how to get the classroom component of their education while earning their desired qualification. Today I learnt of MIT’s new fully automated course on circuits and electronics. MIT have run the OpenCourseWare project for a while now, but it was more of a reference point, where people could sample the lecture notes that MIT uses but did not get credit for reading them or completing the exercises. This course, however, offers a certificate of completion, which means that anyone, anywhere in the world, can now gain MIT recognition of their skills from the comfort of their home.

Arguably, if this course gets a large enough market, someone may start a coaching class to help students understand the material. That would in essence be the separation of the classroom teaching component of education from the exam component, akin to what my old post suggested. Maybe those bright sparks at MIT were reading my blog, though I have my doubts about that.

A fully automated course is nevertheless something noteworthy. I am especially interested in how they have handled the lab component. Do they use circuit simulators exclusively? Do they plan to extend the idea in the future, with accredited venues where students can go to complete the labs? Will the exams be purely multiple-choice questions, or have they devised a way to have computers grade exam papers? I have enrolled, so hopefully sometime in June (when the course ends) I will be able to proudly claim a certificate from MIT and also be able to report on my experiences.

While this new development has me excited about the direction education can take in this century, I can think of a few undesirable implications of rolling out multiple courses or entire degrees through this avenue. Someday soon, I will put those thoughts down too.

Written by clueso

February 14, 2012 at 8:15 am

The science and politics of CO2


Here is a nice article in the NY Times about the origins of the research on atmospheric carbon dioxide levels and where things stand at the moment.

I found it massively interesting and informative. I hope you do too.

Written by clueso

December 23, 2010 at 1:47 pm

Posted in Uncategorized


The LHC computer system.


One of the most widely publicised events in the scientific community in the recent past has been the opening of the Large Hadron Collider. The world reacted with great incredulity and amazement on hearing of the 27 km tunnel built across the border between two nations, the collaboration of 10,000 scientists, the array of detectors all hoping to find the “Higgs boson” and, of course, the idea that went around suggesting that the LHC would create a black hole that would swallow up the earth. The project cost around 4-6 billion euros, a figure which some people thought would have been better spent on more pressing issues. After all, knowing which particle the universe originated from is not as important as solving the global warming crisis, right?

While the LHC was getting so much attention and the odd bit of controversy, there were huge technical developments in its computing systems which have received little mention but are quite groundbreaking in their own right. That is not surprising, since the computers were not the focus of the project, but it is interesting to realise that the computing infrastructure has created a new paradigm out of one of the “secondary” requirements of the project.

The LHC is expected to produce about 5-6 gigabytes of data every second, all of which has to be filtered, processed and stored. The computing infrastructure built to accomplish this is a multi-tiered structure of different locations connected by fibre optic links. This distributed structure, along with the associated algorithms, helped shape what is now known as high throughput computing, which has a slightly different focus compared to high performance computing and could probably be used for some of the heavy number-crunching applications that turn up in the future (climate modelling?). Details of the computing infrastructure can be found in the link, so I will not dwell upon them. What I want to point out is that this is not the first time CERN has innovated in the field of computer science. Way back in 1990, their computer science division tried out something called the HTTP protocol, designed to make it easier for researchers to share their results. The protocol went on to form the foundation of the world wide web and the rest, as is often said, is history.
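A quick back-of-the-envelope sum, taking the 5 GB/s figure above and (unrealistically) assuming continuous running, shows why a single machine room was never going to be enough and why a distributed, multi-tiered design makes sense:

```python
# Order-of-magnitude estimate of the LHC data volume, using the ~5 GB/s
# figure quoted above and assuming continuous running (an oversimplification).
DATA_RATE_GB_PER_S = 5
SECONDS_PER_DAY = 24 * 60 * 60
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

per_day_tb = DATA_RATE_GB_PER_S * SECONDS_PER_DAY / 1000         # terabytes/day
per_year_pb = DATA_RATE_GB_PER_S * SECONDS_PER_YEAR / 1_000_000  # petabytes/year

print(f"~{per_day_tb:,.0f} TB per day, ~{per_year_pb:,.0f} PB per year before filtering")
```

That works out to hundreds of terabytes a day, which is exactly the kind of volume that forces storage and processing to be spread across many sites.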

This goes to underline some of the indirect benefits of pure research to the more “useful” technical fields. I used to feel that research should be all about things that have a tangible benefit to society, and that something esoteric like finding the Higgs boson or colliding protons is a waste of resources. I have now come to realise that while the main focus of any high-cost pure research project (low-cost projects are easy to handle anyway) may not have an obvious tangible benefit, there will always be other practical difficulties which need to be solved and which may well produce innovative solutions.

Scientists working on these problems are also a funny bunch, probably the paragon of the breed that does things purely for the fun of it and not for monetary gain. Forcing these people to work on applied problems may be as horrifying to them as licking stamps for customers in the local post office; it would probably make them give up their aspirations, settle into a routine job and study their fields of interest as a hobby. It is probably better to let them pursue their dreams and develop the technologies needed for the pursuit, and then have another bunch of people, more interested in the applied side of things, try to apply the offshoots of their work to more worldly issues. Unless the research organisation desires to make money from it, it does not even have to employ a large staff to do such work. Just having someone knowledgeable enough to document the results well will do the job.

In other words, it is important not to judge research only by the perceived utility of its focus. The evaluation should also include the secondary outcomes, something that ties in nicely with my earlier post on science being about the journey and not the destination.

Written by clueso

November 14, 2008 at 9:04 am

Posted in Uncategorized


Liberating research.


Here is a story of how Australia is at least thinking of making some of its publicly funded research freely available to anyone who wants to use it.

Being based on a university campus with a university subscription to the largest journals can spoil a person, because we get used to simply clicking on a link and getting to view the article. It is surprising how many hurdles one has to go through to gain access to this information outside university environs. A look at some of the prices makes me shake my head in incredulity, as I have seen journals charge something like $30 just to view a three-page paper online.

The general mechanism for getting a paper published in a journal is that the author(s) send their manuscript to the journal editor, who removes the names (and probably other identifying marks) and sends it to some reviewers whose opinion he trusts. The reviewers go through the paper, try to punch holes in it and send it back to the editor with their comments. Depending on the comments, the editor will either accept the paper, send it back to the authors asking for corrections, or reject it. It is a fairly involved process, but a necessary one, given that findings have to be rigorously examined before they can be marked as trustworthy. It also means that in the days when all this happened with printed sheets of paper and snail mail, someone had to take the effort of mailing the manuscripts back and forth, keeping tabs on who could review what, and so on. Obviously this is where the publishing companies stepped in, and being the capitalistic setup that we are, they extensively tried to control the content so that they could make their money by charging for subscriptions and for the sale of individual articles. The great thing for the journals was that they practically gained ownership of the published articles, despite the fact that the work was done by researchers and paid for with public money. Even more hilarious is that the journals did not pay for these articles; they got them for free but sold them for a price.

That was probably justifiable given the cost and effort of sending manuscripts for review, printing journals and then sending them out to subscribers, but given the presence of the internet, this model of a select few (the publishers) having complete control over the results of publicly funded research sounds quite ludicrous, and I guess some governments are waking up to the fact.

The internet allows the whole operation to be done almost free of cost. A network of academic and industrial researchers could easily be built on a model like any of the social networking sites, where everyone can list their areas of expertise. A soft copy of every new paper submitted for publication can be put up on a forum, where anyone who is interested can read it and make comments. The person doing the job of the editor can then consider the comments and decide whether to allow the paper to be published or not. “Publishing” the paper will involve making a soft copy available for download and probably updating the RSS feed to let subscribers know that there is a new article up. The whole thing can be done completely digitally, with no printing or mailing necessary, and could therefore probably be run by a few people from their garage using a storage service like Amazon Web Services. The printing is now done by the readers who like to read from paper, while others (like me) who prefer soft copies can read directly from the screen. Either way, it has the potential to save a hell of a lot of resources.
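To make the proposed flow concrete, here is a toy sketch of the states a manuscript might move through on such a platform. Everything in it (class names, states, functions) is invented purely for illustration and does not correspond to any real system:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    SUBMITTED = auto()
    UNDER_REVIEW = auto()
    REVISIONS_REQUESTED = auto()
    ACCEPTED = auto()
    REJECTED = auto()
    PUBLISHED = auto()

@dataclass
class Manuscript:
    title: str
    status: Status = Status.SUBMITTED
    reviews: list = field(default_factory=list)

def collect_reviews(ms, comments):
    """Reviewers, picked from the expertise network, attach their comments."""
    ms.status = Status.UNDER_REVIEW
    ms.reviews.extend(comments)

def editorial_decision(ms, decision):
    """The editor weighs the comments and accepts, rejects, or asks for revisions."""
    ms.status = decision

def publish(ms):
    """'Publishing' is just posting the soft copy and updating the RSS feed."""
    ms.status = Status.PUBLISHED
    return f"New article available for download: {ms.title}"

# Example run-through of the flow described above.
paper = Manuscript("Nanostructures in butterfly wings")
collect_reviews(paper, ["Figure 2 needs error bars", "Otherwise sound"])
editorial_decision(paper, Status.ACCEPTED)
print(publish(paper))
```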

The hiccup, as usual, has been the paranoia of the group which would lose its control advantage from such a move, namely the publishers in question. There have been reports of a similar move being stymied by the publishing lobby in Britain, and though I cannot substantiate the claim, the fear behind the publishers’ motives for doing so is quite understandable. No one would like to lose control of their cash cow, but for the betterment of society, I think such moves are necessary.

Written by clueso

October 2, 2008 at 11:37 pm

Posted in Uncategorized
