OK, folks. It is time to get back to some good, old-fashioned quantitative blogging. In her recent diamond blog, Jennifer mentioned Life Gem, which claims to sell diamonds made from carbon extracted from a deceased loved one. A press release about this process floated through our workplace a few years ago. At the time, their technique was to capture the exhaust fumes from cremation, extract the CO2, reduce it, and make a diamond. A colleague and I wondered about this. How much of that carbon would be from the body, and how much from the fuel used to combust it? We both suspected that such diamonds would contain mostly carbon from the fuel, with only a bit of loved one mixed in.
But why speculate when we can calculate? Let’s do the math, and then discuss how to test the prediction.
Assume a 70 kg body that is 20% carbon (various online sources give values between 18 and 23% carbon for a person).
70 kg total × 0.2 C/total = 14 kg C
How much fuel does it take to combust such a body? This website says 20 liters, but it doesn’t specify the fuel type. Let us assume it is fuel oil with a density of 0.85 kg/l and a carbon content of 85%.
20 l × 0.85 kg/l = 17 kg of fuel; 17 kg total × 0.85 C/total = 14.5 kg C
So if we are burning the body using fuel oil, the exhaust carbon will be about half body and half fuel.
Natural gas should give less carbon exhaust. According to Wikipedia, the energy density of diesel oil is about 46 MJ/kg, suggesting that 17 kg × 46 MJ/kg = 782 MJ are needed for cremation. Using natural gas, with an energy density of 54 MJ/kg, we need only 782 / 54 = 14.5 kg of gas. Furthermore, natural gas is only 75% carbon by mass, so the total carbon from natural gas fuel is about 14.5 × 0.75 = 10.9 kg.
Of course, calculations are all well and good, but how can we test the prediction? In the case of fuel oil, testing is difficult. However, natural gas tends to have a very light carbon isotopic composition, with a 13C/12C ratio 4-6 percent lighter than PDB (an arbitrary, but universal, carbon isotopic standard). In contrast, a body will have a carbon isotopic composition somewhere between 2.1 and 2.8 percent lighter, depending on how careful the person was about watching what they ate.
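Here is the same mixing arithmetic as a quick Python sketch. The carbon masses come straight from the back-of-envelope figures above, but the isotopic values are assumed midpoints of the quoted ranges, not measurements:

```python
# Back-of-envelope carbon budget for a natural-gas cremation diamond.
# Masses are from the calculations above; the delta-13C values are
# assumed midpoints of the quoted ranges (per mil, relative to PDB).

BODY_C_KG = 14.0     # 70 kg body at 20% carbon
GAS_C_KG = 10.9      # carbon in 14.5 kg of natural gas fuel
D13C_BODY = -25.0    # assumed: middle of the 2.1-2.8 percent range
D13C_GAS = -50.0     # assumed: middle of the 4-6 percent range

total_c = BODY_C_KG + GAS_C_KG
body_fraction = BODY_C_KG / total_c

# Simple two-component mass-balance mixing of the carbon sources:
d13c_diamond = (BODY_C_KG * D13C_BODY + GAS_C_KG * D13C_GAS) / total_c

print(f"Body carbon fraction: {body_fraction:.0%}")           # ~56%
print(f"Predicted diamond d13C: {d13c_diamond:.1f} per mil")  # ~-35.9
```

If the measured value sits near the body's composition, the diamond is mostly loved one; if it sits down near the gas value, it is mostly fuel.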
So if you really want to know how much of your diamond derives from your deceased beloved, and how much is burned natural gas (itself the remnant of long dead organisms), you can measure the carbon isotopic composition of the diamond. There’s just one problem.
Measuring carbon isotopes requires a destructive analysis. So some or all of the gemstone must be consumed. Otherwise, you’ll never really know which body is in the diamond.
Saturday, March 31, 2007
Friday, March 30, 2007
Ten young Americans
Note: This is a purely political post. There is no scientific content.
The following ten links refer to news stories about ten young Americans. They are from all over the country, and have quite different backgrounds, but the one thing that they have in common is that they all made the news, often in heartbreaking fashion.
Ming Sun, 20
Milton Gist, 27
Jennifer J. Harris, 28
Hector Leija, 27
Darrell Wayne Shipp, 25
Luis Rodriguez-Contrera, 22
Ashly L. Moyer, 21
Nimo Westhill Masaniai Tauala, 29
Curtis E. Glawson Jr., 24
Barbara Bush, 25
Thursday, March 29, 2007
Environmentalists: shilling for Krispy Kreme?
Some typical environmental leader- possibly from the WWF- was on the radio the other day, warning that biofuels are problematic, and suggesting that we might actually all be better off freezing in the dark.
His chief complaint was that biofuels might increase the price of foodstuffs, thereby making life hard for the poor. This is an interesting proposition, one that invites closer analysis. For example, what foodstuffs would most likely be affected?
The obvious answer is that agricultural commodities used for fuel production would be subject to an increase in demand, resulting in an increase in price. If those commodities are commonly consumed by those with limited disposable income, then such people would be forced to switch their dietary habits, or go without.
So, what are the foods most likely to be used for fuel production? At present, the two main biofuels are ethanol and bio-diesel. Biodiesel is made from vegetable oils, while ethanol is mostly made from sugar, with some production coming from feed corn.
Thus, in a biofuel dependent economy, we would expect sugar and vegetable oil (and, to a lesser extent, red meat) to become more expensive, and poor people would have to cut back on these items.
Such an event would be a cultural disaster. The liberal worldview requires a repressed underclass which is forced to poison itself with junk food by an uncaring economy. Imagine a beleaguered working class breadwinner eschewing his donuts for oatmeal. Imagine the hardship caused by drinking unsweetened tea instead of soda. People might actually start improving their lives instead of wallowing in victimization.
In the developed world today, obesity and diabetes are especially rampant among low income people. Increasing the price of the food items which cause these diseases would disrupt this demographic. It is even theoretically possible that it could reduce the incidence of these debilitating diseases, freeing the victims from a lifetime of suffering and dependence. And nothing threatens liberals more than the possibility of poor folks being less dependent.
So it is no wonder that the greenies are supporting the donut industry. Biofuels could make junk food too expensive for poor people to poison themselves with it. That would be a market triumph. Something that liberals must avoid at all costs.
Facelift
I've tried freshening up the blog by replacing the stock photo with a picture of some 3.3 billion year old zircons from my PhD research. The holes are from laser ICP analyses. The zircons are from a sandstone unit in the Jacobina group, which is a Paleoproterozoic cover sequence on the Sao Francisco craton, in Brazil. What I can't figure out is what color to use for the font, in order for it to be readable without being ugly.
Wednesday, March 28, 2007
Mantle melting
Much of the island of Tasmania is covered in basalt, which sometimes shows columnar jointing
There are a few processes in geology that are so fundamental and thoroughly taught that we forget that they are counterintuitive to normal people. One of these is mantle melting. The Earth consists of compositional layers of increasing density from surface to center. Near the surface is the crust, which is about 5-70 km thick. The crust has a complex composition I don’t want to get into now, but it is mostly silicates- complex metal oxides that include at least some silicon. Below that is the mantle. The mantle is mostly magnesium silicates, with some iron substituting for magnesium, and a few less common calcium and aluminum bearing minerals. The core is metallic iron, and is mostly molten. This, in itself, is fairly intuitive.
When the mantle partially melts, the resulting magma is less dense, and rises either to the base of the crust, or through the crust to the surface. For reasons I’ll skip for now, the composition of this partial melt is different to the unmelted material left behind. This mantle-derived magma* is known as basalt, and a lot of the Earth’s crust is made of this material. You can see basalt in the photo above, in the post before my last one, and in volcanoes like Hawaii. Most of the ocean floor is covered in basalt. So, when the mantle partially melts, we get a molten rock called basalt. And that is not particularly unintuitive either.
The tricky bit is this: Unlike most of the melting we see in our daily lives, the melting of the Earth’s mantle is never caused by heating it up. Most of the mantle is solid rock, and it melts fairly regularly, but most of the processes that melt it actually cool it, rather than heat it.
We geologists are so used to this that we don’t bat an eye, but for normal people, melting without heating seems a tad unusual. But then, the mantle is quite a different place than the kitchen counter.
The mantle is very hot, fairly dry, and under extremely high pressure. This pressure ranges from a few thousand times atmospheric pressure at the top of the mantle to about 1.3 million times atmospheric at the bottom. And one of the effects of increased pressure is that it raises the melting point.
In fact, most of the solid mantle is so hot that, if you suddenly released the pressure around it, it would spontaneously melt. It is only the pressure that keeps it solid. The mantle can also flow- at high temperatures and pressures, solids become slightly ductile, and over long periods of time, the mantle can ooze around at speeds of a few centimeters per year.
When a very deep, very hot part of the mantle rises close to the surface, if it rises faster than it can cool, it will generally start to melt once the pressure drops to around 15-25 thousand atmospheres. This is called decompression melting, and is the main cause of basalt magmatism in mid-ocean ridges and hotspots.
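For the quantitatively inclined, here is a toy version of the decompression argument in Python. The solidus and adiabat are crude linear fits, and all the numbers are round illustrative values rather than anything taken from the literature:

```python
# Toy decompression-melting check: does hot mantle rising adiabatically
# cross the dry peridotite solidus before reaching the surface?
# Both curves are linearized; all values are rough, illustrative numbers.

T_POTENTIAL = 1350.0     # assumed mantle potential temperature, deg C
ADIABAT_SLOPE = 12.0     # deg C per GPa along the solid adiabat (approx.)
SOLIDUS_T0 = 1100.0      # approximate dry solidus at the surface, deg C
SOLIDUS_SLOPE = 130.0    # deg C per GPa, low-pressure approximation

# Adiabat: T = T_POTENTIAL + ADIABAT_SLOPE * P
# Solidus: T = SOLIDUS_T0 + SOLIDUS_SLOPE * P
# Melting begins where the rising parcel first gets hotter than the solidus:
p_cross_gpa = (T_POTENTIAL - SOLIDUS_T0) / (SOLIDUS_SLOPE - ADIABAT_SLOPE)
depth_km = p_cross_gpa * 30.0    # roughly 30 km per GPa in the upper mantle

print(f"Melting starts near {p_cross_gpa:.1f} GPa (~{depth_km:.0f} km)")
# ~2.1 GPa, i.e. about 21 thousand atmospheres- consistent with the
# 15-25 thousand atmosphere range quoted above.
```

The rising rock never gets hotter; it simply crosses a melting point that falls faster than its own temperature does.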
In some cases, mantle rises so slowly that it cools faster than it decompresses. When this happens under a spreading center, you get ocean floor made of mantle rock, not basalt, because no melt was produced. This is rare, but there are known occurrences, mostly at very slow spreading ridges, such as those in the Arctic Ocean or the ridge between Africa and Antarctica. An expedition to a newly discovered crustless region was recently summarized by the rockbandit here.
The second main cause of melting is water. When ocean floor sinks down into the mantle, it can carry water with it, which eventually escapes into the surrounding mantle. Wet mantle has a lower melting temperature than dry mantle, so the introduction of water into warm, dry mantle triggers melting. This is what produces arc volcanism above subduction zones in places like Japan or Chile, although these wet magmas sometimes interact with the overlying crust in ways that change their composition so that they are no longer basalt.
The key point is that, unlike melting butter or ice at home, melting in the mantle is not caused by the addition of heat. It is caused by lowering the melting temperature of material that is already hot.
*By far the most common one. There are some rare, volumetrically insignificant mantle melts that are not basalts, but we can ignore them for now.
Sexual Sin and Christians (repost)
I originally wrote this for a usenet group as an undergrad back in 1993. This is the edited, cleaned up version. The original can probably be found floating around cyberspace somewhere, but it has more typos, so I prefer this version:
It is a well known phenomenon that Christians, particularly dogmatic, fundamentalist Christians, have a disturbing tendency towards homophobia. There have been many suggestions as to why this is, including direct quotes from scripture, statements by powerful theological figures, and other such religious ideas, but in order for a non-Christian to really understand the nature of this deep-seated prejudice, a non-religious model must be constructed to show exactly why it is that traditional Christian beliefs often seem to conflict with homosexual activity. The following model, which is principally based on traditional Newtonian physics, should do much to explain this phenomenon. Unfortunately, while it may bring understanding to the agnostic physicist or computer scientist, its physical nature will probably enlighten the godless historian or other liberal artist no more than the traditional explanations. However, since such folk are usually excellent at coming up with dubious explanations of variable credibility (abbreviated by biologists as B.S.), we are confident that they can create their own creative explanations, and thus have no use for this one.
We will consider God as a point mass, centered at the origin of our xyz space. Christ, we will assume, is at the right hand of God, or about 100 centimeters away. His mass is probably around 75 kilograms. Since God has a very large mass (a bit less than infinity), Christ, who we will assume is in a circular orbit around God, has a very large momentum, and hence a very small wavelength. This means that Christ's uncertainty is quite small, so we can conclude that He is fairly certain in all that He does. Now let us consider a sinner. We shall place him at a large distance from God, say one inch and 45 million light-years. He, also being in a circular orbit, will be traveling significantly slower than Christ, and will therefore be more uncertain about it. One should also consider, however, that since Christ's orbit could fit in a kiddie pool, while the sinner's would encompass not only our galaxy, but a few of the nearby ones as well, the sinner gets around more, sees more, and is generally a more knowledgeable guy than the Savior. This fits in with traditional wisdom. From this situation we can draw a few conclusions. The first is that Mary, the mother of Christ, being a fairly pure person, is close to God. This means that she must be a fast woman. The second conclusion that can be drawn is that sinners have a lot more potential than saints, since less of their energy is stored as kinetic energy. Further insights can be gained when we look at the situation of the heathen.
A heathen is someone who is not affected by God. This means that they are at least an infinite distance from Him. Now, assuming that one of these folk starts to travel towards God, he will convert his potential energy to kinetic energy during the approach, or descent. Since he started out an infinite distance away, but with some kinetic energy of his own, he will approach God on a hyperbolic trajectory and then disappear into space, never to be seen again. If his approach is such that it brings him inside the orbit of the Son of God, then right after his closest approach, the sinner's velocity will be greater than Jesus', which means that he will be more sure of himself in his escape than Christ is in orbit. This is an interesting notion, but some of the side ramifications are even more intriguing.
Without any orbiters, therefore, God would not be able to attract anyone - all approaching bodies would have either parabolic or hyperbolic trajectories. However, once God has an orbiter, the two of them could collaborate to capture other bodies. This means that heathens who get too close to believers in their approaches might get trapped, and by the same token, believers who are buzzed by heathens could be ejected. And what, the reader asks at this point, does any of this have to do with sex? It is, after all, that, and not Newtonian physics, that gets Christians so agitated. Well, the answer is this: Sex, as we all know, is the union of two or more people. This, in our analogy, would be represented as a collision. Now, in Christianity, almost all of the holy figures are male. For God, a collision between any of these close-in folk would be disastrous, because, even if we assume they are indestructible, such a high energy collision would
eject one of the men in it,
cause one of them to fall into God, or
give them highly irregular elliptical orbits.
All of these would be bad for God, because in the first two cases He would lose orbiters, reducing His chances of capturing new ones, and in the third case He would have a much greater chance of more collisions, as the elliptical orbiters would cross many of the unaffected circular orbits. Therefore, God probably disapproves of these collisions.
Unfortunately, this theory is far from robust. It does not, for example, contain a method for experimentation whereby one can determine its validity. It also assumes that religious figures are sufficiently slow that they do not attain relativistic speeds. Considering the large mass of God, this seems improbable. In fact, if God is as large as we suggest, the orbit of Christ would probably lie inside of His Schwarzschild radius. This would make figuring out what those two are doing very difficult, since none of the rest of us in the outside universe would be able to see beyond that limit, but because the bond between Them would be incredibly powerful, the evidence all points towards something that the Bible is not in favor of. On the other hand, it is the opinion of this author that whatever one does inside of one's personal black hole is one's own business, and therefore, I shall turn my attention to other matters.
Friday, March 23, 2007
Scooping the thrust sheet
A while back, Highly Allochthonous promised us some juicy field photos of South Africa. Since he has yet to produce the goods, I thought I'd whet y'all's appetite with this picture of the Drakensberg. The Drakensberg mountains are the 2 km wall formed by the eastern edge of the Karoo flood basalts. That's them to the left, and some of the underlying sandstone outcrops in the valleys below.
Thursday, March 22, 2007
OHS and pregnancy
Sciencewoman recently blogged about reasons women leave science, and one of her commenters brought up the issue of pregnancy and laboratory safety requirements. The way I see it, there are three basic approaches to this issue. I will lay them out as dispassionately and factually as is scientifically possible.
1. The Victorian chauvinist approach. This approach assumes that womenfolk are vital to the health of the country as bearers of young men who we desperately need to send into the trenches against the Germans. As such, it is vital to protect the childbearing resource at all costs, and any pesky activity like earning a living could endanger the ability of society to breed a new generation of pig-headed, antediluvian assholes.
2. The lawyerphilic approach. This approach assumes that any risk, however minute, must be avoided in order to protect the university gold. Instead of resources to protect, pregnant women are liabilities to minimize. Other than that distinction, this approach is identical to the Victorian chauvinist approach.
3. The sensible scientific approach. Under this system, pregnant research staff are informed of potential risks, and those risks are compared to more familiar, out-of-lab dangers in order to make them comprehensible. The lab then makes arrangements so that if the pregnant researcher chooses not to take a risk, it does not impact her work. She then makes an informed choice about how to proceed.
Here’s an example of how the third way works, taken from a long time ago in a university far far away…
Dr. XX, a pregnant post-doc, wanted to know if exposure to her 233U spike would constitute a radiological hazard to her unborn child. The lab supervisor, Dr. XY, showed Dr. XX the math to determine what the decay rate was. He then pulled out a Geiger counter and put it next to her spike (tick tick tick….) to demonstrate. In order to compare this with the radiological risk from real-life items, he then put the counter next to a cement wall (tictictictictic…). Dr. XY then reminded Dr. XX what the biological effects of ionizing radiation were, and told her that if she chose not to spike her own samples, he or someone else would be happy to do it for her. Dr. XX then made her informed decision.
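For anyone who wants to see the sort of numbers involved, here is a rough sketch of the decay-rate math. The 1 microgram spike mass is an assumption for illustration; the anecdote above doesn't say how big Dr. XX's spike actually was:

```python
# Rough activity estimate for a hypothetical 1 microgram 233U spike.
import math

HALF_LIFE_233U_YR = 1.59e5    # half-life of 233U, in years
SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23
ATOMIC_MASS_233 = 233.0       # g/mol

spike_mass_g = 1e-6           # assumed spike size: 1 microgram
atoms = spike_mass_g / ATOMIC_MASS_233 * AVOGADRO
decay_const = math.log(2) / (HALF_LIFE_233U_YR * SECONDS_PER_YEAR)  # 1/s
activity_bq = decay_const * atoms

print(f"Spike activity: {activity_bq:.0f} Bq")   # ~360 decays per second
# For comparison, an adult human body runs to several thousand Bq
# from natural 40K and 14C- which is what makes the bump joke work.
```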
Note to Dr. XY wannabes. If you are looking for a slightly radioactive everyday item to compare a low level radiohazard to, DO NOT USE the woman’s bump! While it may seem perfectly logical to point out, “Look, your baby’s already way more radioactive than your sample,” in practice this approach is asking for trouble. So unless you want your Geiger counter forcibly inserted into your low photon environment, find something else.
Wednesday, March 21, 2007
Isotope dilution
Sciencewoman has an interesting article enumerating the barriers to people in general, and women in particular, who wish to be professional researchers. There has been some interesting discussion on this issue. I would like to contribute. Unfortunately, before I can do so, I need to explain isotope dilution.
Isotope dilution is a way of getting very accurate concentration numbers from mass spectrometers. One of the problems with mass spectrometry is that you generally don’t know what proportion of the total sample you introduce actually gets to the detector. This depends on a number of different things, including the ionization efficiency and transmission efficiency of the instrument. In general, the relative efficiencies of different elements are not equal and not constant. As a result, tricks must be employed to account for this.
Isotope dilution is one of these tricks. In isotope dilution, you add a known amount of a single isotope of the element you wish to measure to the sample. The isotope you add is called the spike. The action of adding it is called spiking. Since you know how much spike you added (this is measured very carefully), the ratio of spike detected to spike added gives the detection efficiency of the instrument, and the ratio of spike to natural isotope gives the concentration of that isotope.
Spikes are often, but not always, short-lived radioactive isotopes not found in nature. For example, 233U is commonly used to spike U solutions for measuring U concentrations. Because separating and/or creating spikes via nucleosynthesis is difficult, they are often expensive, and the radioactive ones can be a potential radiation hazard.
Obviously there are all sorts of further improvements on the process- double spiking to determine mass bias, changing dilutions to determine detector linearity, etc. But the basic idea is the same. You add a known amount of something unique to account for the screwy processes that occur inside the machine.
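To make the bookkeeping concrete, here is a bare-bones sketch of the basic single-spike calculation. The numbers are invented for illustration, and real data reduction would also correct for mass bias and for any non-spike isotopes present in the spike itself:

```python
# Minimal single-spike isotope dilution sketch, with invented numbers.
# Spike: pure 233U (absent in nature); unknown: 238U in the sample.
# Detection efficiency cancels, because both isotopes are the same element.

spike_mol_233 = 1.0e-12      # moles of 233U added (measured very carefully)
ratio_233_238 = 0.05         # 233U/238U measured on the mass spectrometer

mol_238 = spike_mol_233 / ratio_233_238   # moles of 238U in the sample
sample_mass_g = 0.5                       # mass of dissolved sample

conc_ng_per_g = mol_238 * 238.05 / sample_mass_g * 1e9
print(f"238U concentration: {conc_ng_per_g:.2f} ng/g")   # ~9.52 ng/g
```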
Monday, March 19, 2007
A dark day for science blogging
"I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced."
Frink tank has gone dark, no doubt frozen in carbonite by the evil minions of taste and respectability.
The blogosphere has lost its most sophisticated, precise voice.
I will try to lower the level of discourse here at the lounge, as a sign of respect. It is the least I can do.
Friday, March 16, 2007
Gender representation of my blogroll
My blogroll got mangled when I upgraded to the new format, so I’ve been putting off fixing it. One method of procrastination was to break down the links that haven’t disappeared, in order to look for inadvertent bias. It appears that I have 11 men, 10 women, and one robot linked. Interestingly, the two “et al.” blogs on there are written by either all women (inkycircus) or all men (realclimate). I have no idea what the gender breakdown of the blogosphere is, or whether it is desirable or even sensible to have a target. I thought I might just float this little observation out there without any means to evaluate it.
Electron probes vs ion probes
CJ asked me to explain what this so-called ultraprobe is all about, so here’s a brief rundown on the difference between electron probes and ion probes, and why it might be useful. The ultraprobe is an electron probe.
An electron probe bombards a sample with electrons with energies in the low tens of keV. At these energies, the electrons can dislodge inner shell electrons in the material being analyzed. When an outer shell electron then decays to fill that vacancy, an X-ray is produced, and the energy of that X-ray is characteristic of each element. So by attaching a bunch of X-ray spectrometers to an electron gun, you can determine the elemental composition of the target by analyzing the X-rays produced by electron bombardment. The spectrometers basically use crystals with known lattice parameters to diffract the X-rays according to their wavelength.
Since X-rays are photons, the wavelength and energy are related by the equation e=k/lambda, where e is energy, lambda is wavelength*, and k is a constant, about 1240 eV/nm. This constant can be derived by multiplying the speed of light (in nm/s) by Planck's constant (an obscure, extremely small physics number that relates fundamental physics properties to each other), and dividing by the number of joules per eV (not many).
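Here's that conversion as a short Python sanity check (the Fe K-alpha wavelength is quoted from memory, so treat it as approximate):

```python
# Sanity check of e = k/lambda for characteristic X-rays.
H_PLANCK = 6.626e-34     # Planck's constant, J*s
C_LIGHT = 2.998e17       # speed of light, nm/s
J_PER_EV = 1.602e-19     # joules per eV (not many)

k = H_PLANCK * C_LIGHT / J_PER_EV    # ~1240 eV*nm

def xray_energy_ev(wavelength_nm):
    """Photon energy in eV for a given X-ray wavelength in nm."""
    return k / wavelength_nm

# e.g. Fe K-alpha at ~0.194 nm comes out near 6.4 keV:
print(f"{xray_energy_ev(0.194):.0f} eV")
```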
Electron probes are great for determining the major and minor elemental composition of minerals, but they give no information on isotopic composition, since isotopes all have the same electron configurations. Since the only things being moved are electrons, this technique is non-destructive, unless you turn the power up too high and melt your sample.
Ion probes are Secondary Ion Mass Spectrometers. They bombard a sample with ions, and those ions then ionize the target. The ions from the target are then accelerated into the mass spectrometer and sorted by mass. Ionization efficiency differs from element to element, but is the same for isotopes of the same element. Since the ions are sorted by mass, isotopes can be separated and their ratios are easy to determine. Elemental ratios can also be determined if a standardization method to determine relative ionization efficiencies is used.
U/Th/Pb geochronology determines radiometric ages by using one or more of the following decay chains: 232Th -> 208Pb, 235U -> 207Pb, and 238U -> 206Pb. Comparing the results of two or more of these chains allows a geochronologist to determine whether or not the geologic material being dated has lost or gained U, Th, or Pb since crystallization, since elemental loss or gain will disturb one or more of these systems. We can also measure 204Pb, which is not a decay product, to estimate how much initial Pb was in the material. Thermochronic may have explained this in more detail.
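As a concrete (and heavily simplified) example, here is what a single-system 238U-206Pb age looks like, using an invented ratio. Real work would compare two or more systems, exactly as described above, to check for open-system behavior:

```python
# Single-system 238U-206Pb age from a measured atomic ratio.
# The ratio is invented for illustration; a real age would be checked
# against the 235U and 232Th systems for concordance.
import math

LAMBDA_238U = 1.55125e-10     # 238U decay constant, 1/yr

pb206_u238 = 0.6              # radiogenic 206Pb/238U atomic ratio (invented)
age_yr = math.log(1.0 + pb206_u238) / LAMBDA_238U

print(f"Age: {age_yr / 1e9:.2f} Ga")   # ~3.03 Ga for this ratio
```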
Because the electron probe cannot measure isotopes, it cannot determine whether or not a sample has lost or gained U, Th, or Pb, and it cannot identify the presence of common Pb. Thus electron probe dating has to assume closed system behavior and low common Pb. It is also limited to minerals with a fairly high U, Th, and Pb content, since the detection limits on the electron probe are generally in the tens of ppm. On the other hand, they are easier to use, more common, and non-destructive. And if you choose your geologic question such that Pb loss and common Pb are unlikely to occur, and high precision is not necessary, you can get useful numbers quickly and easily.
The article mentions a potential snowball earth application, so my guess is that they plan to use this thing for sedimentary survey work. As Brian previously demonstrated, sediments contain datable minerals (e.g. zircon, rutile, or monazite) of various ages, but the sediment cannot be older than the youngest grain it contains.
If some future civilization wanted to determine if the Mississippi delta sediments were Quaternary (that’s our current geologic period), they could dig out a whole lot of grains and analyze them. The youngest monazites in the sediment will be from the Yellowstone Hot Spot, which is a Quaternary rhyolitic volcano. Trouble is, the grains from that volcano are only a tiny portion of the total sediment load of the Mississippi, so in order to find them, you need to survey a huge number of grains.
The electron probe can be used to eliminate grains that can’t possibly be the right age- a grain with too much Pb is either too old or contains common Pb, while a grain with not enough Pb has either lost Pb or is too young. Since electron probe work is non-destructive, the grains identified with the electron probe can then be retrieved and analyzed using a more precise method, to get a high-confidence, precise date. So I suspect that is the Neoproterozoic application that this machine will have.
Neither probe technique possesses the precision of isotope-dilution mass-spectrometry, but isotope dilution cannot be performed in-situ.
It’s too late to proofread, so if there are any glaring fuck-ups, comment and I’ll fix them later.
* Usually, lambda is the decay constant, but scientists like to use the same Greek letters for a million different unrelated quantities, just to piss everyone off.
Tuesday, March 13, 2007
Ten years ago today...
March 14, 1997:
I am now in Australia. It is dry and partly forested, and actually reminds me of central California. I could swear we flew over a normal-fault scarp on the way here, but maybe I'm oversensitized. Everything (cars, toilets, shadows, etc.) goes the wrong way. The stars are amazing- I can see the Milky Way from the suburbs. Orion is upside down, and I saw the Southern Cross. I'm living in a house with three other students- Matthew, Richard, and Shin. A Korean and two Tasmanians. We had a party tonight- met both native and foreign students.
Friday, March 09, 2007
Drinking sweet holy Jesus
Shelley at scienceblogs has a post about bottled holy water. This reminds me of the after-dinner back-of-the-napkin calculation that my dad showed me as a kid, in order to demonstrate the size of a mole. The calculation is this:
Determine how many molecules of Jesus are in the glasses on the dinner table.
Assume the following:
That Jesus was a person who actually existed, that he died about 2000 years ago, and that the molecules of his body still exist on Earth.
If you believe in the literal ascension of Jesus to a heaven that is not chemically mixed with the Earth’s atmosphere, that’s fine too; keep following the math and I’ll get back to your point of view at the end.
Please also assume that Jesus weighed 65 kilograms (143 pounds), and that 60% of his body weight was water. This is not an unreasonable bodyweight for that time period, and it makes the math easy.
At present, we are only concerned with the water molecules in Jesus. How many are there? This is determined by dividing the total mass of water by the formula weight, then multiplying by Avogadro’s number.
65 kg × 0.6 water content = 39 kg of water
39 kg = 39,000 g of water
39,000 g ÷ 18 g/mol = 2167 moles of water
2167 moles × 6.022e23 molecules per mole = 1.3e27 total molecules of H2O in Jesus.
So, where is this water?
Assuming that he evaporated during ascension, or was buried in a tomb that allowed evaporation or groundwater flow, it is likely that Jesus’ water has found its way to the ocean. The ocean has a mixing time of approximately 1000 years, and since Jesus died almost 2000 years ago, it is a reasonable assumption that the water molecules of his body have been homogeneously diluted by the entire volume of all the world’s oceans.
What is that volume? According to various sources, it is approximately 1.3 billion cubic kilometers. Because a cubic kilometer is a trillion liters, the total volume of the oceans is 1.3e21 liters.
So, to determine the Jesus dilution factor, we divide the total number of Jesus molecules by the total volume of the oceans.
1.3e27 molecules / 1.3e21 liters = 1e6 molecules per liter.
Each liter of ocean water (and rain water, and tap water, and wine) contains a million molecules of Jesus.
In other words, when multiplied by Avogadro’s number, Jesus is bigger than all the world's oceans- by six orders of magnitude.
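For anyone who wants to check it, the whole napkin calculation fits in a few lines of Python; the figures are exactly the assumptions stated above, nothing more:

```python
# The whole back-of-the-napkin calculation in one place.
BODY_MASS_KG = 65.0          # assumed body mass of Jesus
WATER_FRACTION = 0.6         # assumed water content
MOLAR_MASS_WATER = 18.0      # g/mol
AVOGADRO = 6.022e23          # molecules per mole
OCEAN_VOLUME_L = 1.3e21      # ~1.3 billion cubic km of ocean

moles = BODY_MASS_KG * WATER_FRACTION * 1000.0 / MOLAR_MASS_WATER
molecules = moles * AVOGADRO
per_liter = molecules / OCEAN_VOLUME_L

print(f"{moles:.0f} mol of water = {molecules:.2g} molecules")
print(f"~{per_liter:.2g} molecules of Jesus per liter of seawater")
```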
For the information of any Satanists who wish to avoid drinking Jesus, I should point out that there are a few sources of water that have been isolated from the hydrological cycle for timescales longer than the Reign of our Lord. Fossil aquifers such as the Ogallala or the Great Artesian Basin contain groundwater that fell as rain tens of thousands of years ago, well before Jesus’ time. The Greenland and Antarctic ice sheets sequester ice for hundreds of thousands of years, so only the top layer will contain molecules of Jesus. And fossil fuels, which have been preserved in the sedimentary record for tens to hundreds of millions of years, can be burned to produce CO2 and water vapor, which will be devoid of Jesus’ water molecules.
So what does this mean?
First of all, transubstantiation is moot. The wine already contains a million molecules per liter that derive from Jesus’ blood via the hydrologic cycle. So no further change is necessary. In fact, a miracle is only necessary if, as mentioned above, you believe that Jesus’ body literally disappeared from the Earth during ascension. And even then, the math will still get you.
Because even if a liter of water- or communion wine- doesn’t contain a million molecules of Jesus, it does contain an equal number of molecules of Judas*. And Pontius Pilate*. And of every other scoundrel, heathen, and prehistoric caveman ever to walk the face of the planet. So you’ll probably be needing that miracle you pray for.
The oceans may seem large. People may seem like insignificant specks on the surface of this pale blue dot. But Avogadro’s number is big enough to make up the difference and more. 6.022x10^23 is a very big number.
* This assumes that Judas and Pilate had the same stature and body mass index as Jesus, and that neither of them bodily ascended to heaven upon their demise. I think these are safe assumptions.
Thursday, March 08, 2007
What is geochemistry?
Over at Green Gabbro, Yami says that she gets all technical and jargony whenever she doesn’t want to talk about work to normal people. I don’t do that. I can turn people off just by being very basic.
“I determine the chemical composition of rocks.”
My, what interesting shoelaces we all have.
As Chris recently posted, though, it isn’t about the rocks. Don’t get me wrong, geologists appreciate a pretty rock when we see it. But that isn’t why we study them. We study rocks because they tell us stories. And the stories are very cool.
Take geochemistry, for example. Actually knowing the composition of a rock is in fact pretty dull, if it’s just a list of elements and concentrations. The reason we study them, then, is to discover the processes that led to the composition that we measure in the lab.
Different processes change chemical composition in different ways, so by measuring various elemental ratios, we can determine what a rock has been through. To start at the beginning, though, we need to acknowledge the sub-field of cosmochemistry.
Cosmochemistry is generally not the chemistry of the cosmos; in practice, it is the chemistry of the solar system. We have very few mineralogical materials that predate the formation of the solar system. The vast majority of non-terrestrial rocks that we have access to are from various solar system bodies.
The chief goal of cosmochemistry is to determine how the planets formed and what their composition is. Once the bulk chemistry of the Earth is determined, various processes that act on Earth can then be studied using geochemistry. The processes include, but are not limited to, the formation of continents, the evolution of the atmosphere and ocean, ancient and modern climate, the creation of economically significant ore bodies, and the pollution that results from the exploitation of such deposits. Some of these processes are pretty cool, which is why I like my job.
Wednesday, March 07, 2007
Paper predicament
A co-author wants the experimental & analytical methods section from me in a few weeks. That's fine, except that I used a new method that hasn't been described in the literature before. So theoretically, I should publish the method paper first.
Writing a section in a few weeks is easy. Writing the paper it's based on? Let's just say that I should spend a bit less time on blogs and beer drinking over the next few weeks.
Sunday, March 04, 2007
The Pseudoscientific Method
Hat tip to Janet for starting the flowchart frenzy last week.
Related post: Women in pseudoscience.
Friday, March 02, 2007
Baby steps
As I mentioned last week, we’re expecting a baby soon. Which means, after the insomnia and the thousands of diapers, we’ll be hearing the pitter patter of tiny little feet. Only they won’t be. I’ve got size 14’s, so if genetics still works the way they taught me in high school, we should be prepared for little footprints more like these:
Thursday, March 01, 2007
Paper pause
Expect light blogging for a while; I’m trying to write up the erotic alkali stuff- the project from which I presented a single datum at the Goldschmidt conference. As you can see from the figure below, I collected too many data. So now I have to make sense of them, or at least come up with good excuses to throw the crappy stuff out.
For example, excluding the days when we were running with the designated high alkali cones gives something like this- which almost looks vaguely half-under control for some of the time.
Even after I get a handle on all this, I still need to write the damn thing- and I suck bigtime at writing papers. So the lounge may be a bit quiet for a while.