Thursday, July 3, 2014

What is Dark Matter and Dark Energy?

In a previous post, I provided a long explanation of what Dark Matter and Dark Energy are.
This post is a quick summary that I wrote in response to a post at Dispatches from Turtle Island titled "CDM looks better relative to WDM with better models."

Here's a quick summary of my best guess at an explanation of dark matter and dark energy.

Dark matter = a 2-10 keV rest-mass, mostly-sterile neutrino (spin 1/2) that is quantum degenerate in galaxies, which prevents the dark matter from clumping together.

Dark energy = the quantum degeneracy pressure of ~0.01 eV rest-mass, active light neutrinos (spin 1/2). Most of these active neutrinos are produced as the mostly-sterile neutrinos each decay into many light neutrinos.

Below is my comment:

Tuesday, July 1, 2014

Rosencrantz and Guildenstern Are Alive: The case for Edward de Vere

I've been taking a break from energy and physics, and delving into the topic that caused me to pick the pen name that I did for this blog: Eddie Devere.
Yes, this is a play on the names Eddie Vedder and Edward de Vere, two artists I admire greatly.
I was drawn into the Shake-Speare authorship question by a friendly email from Alan Tarica, who sent me a link to a website (Forgotten Secrets) he created in which all 154 of William Shake-Speare's Sonnets are available to read, along with Alan's comments. While there's a lot to read, Alan makes a very convincing case that the Sonnets were written by Edward de Vere, and that the sonnets are addressed to Queen Elizabeth and the Earl of Southampton, who is likely the son of Edward de Vere and Queen Elizabeth. While I personally think that there's still some debate as to whether the Earl of Southampton was the bastard child of Edward de Vere and Queen Elizabeth, I have virtually no doubt that Edward de Vere used the pen name William Shake-Speare. The goal of this blog post is to give a summary of the main arguments for why Edward de Vere is the actual author of the sonnets, narrative poems, and plays that were written under the pen name William Shake-Speare.


Trying to determine who is the author of these sonnets, narrative poems, and plays is like going down the rabbit hole or getting stuck in the Matrix. It's easy to get lost in a world of Elizabethan politics, paranoia, and conspiracy theories. But let's not get stuck down the rabbit hole.
Let's ask ourselves one simple question: what do famous authors write about?  Answer: they write about what they know best.

What did James Joyce write about? What about Faulkner? Virginia Woolf? They wrote about what they knew best: Ireland, the South, and depressed women.

So, let's look at a few of the many possible authors of the Shake-Speare collection: Francis Bacon, William Shakspear, Edward de Vere, Queen Elizabeth, Christopher Marlowe, and Ben Jonson.

Now let's ask the question: what did Francis Bacon write about? He wrote about science and religion. His most famous text (Novum Organum) is a philosophical text about the methods of science, written as a list of numbered aphorisms. It's pretty dry, just like Aristotle's lecture notes, the "Organon", on which this text is based. Francis Bacon just didn't have the literary skills to write the Shake-Speare collection, even though he might have had the education to have done so.

Now what about William Shakspear? We don't know much about William Shakspear of Stratford-upon-Avon. But one thing is abundantly clear: William Shakspear of Stratford-upon-Avon could barely sign his own name, let alone write poems. William Shakspear's will makes it abundantly clear that he was not a world-famous playwright. Likely, what happened is that, after the death of William Shakspear of Stratford-upon-Avon, the local church in Stratford-upon-Avon tried to make it look like he was William Shake-Speare, due to the similarity of the names and the fact that nobody else had stepped forward as the author of the poems and plays.

So, let's once again ask the question: what did Edward de Vere write about? Guess what! Edward de Vere wrote poems about love and melancholy, with a lot of references to Greek and Roman mythology. Here's a link to some of the poems. But that's not all. As detailed in a Frontline documentary made in 1989 on the Shake-Speare authorship question, it was well known at the time that Edward de Vere wrote under a pen name. (See the end of the following website for quotes from famous writers who list Edward de Vere as an excellent poet and playwright.)


A lot of authors write under pen names. Here's a wiki list of some of the famous ones. Some of the most famous include: Ben Franklin (Richard Saunders of Poor Richard's Almanack), Mark Twain (Samuel Langhorne Clemens), Pablo Neruda, Molière, Lewis Carroll, Mary Ann Evans (George Eliot), George Orwell (Eric Arthur Blair), Leslie McFarlane (of Hardy Boys fame), J.K. Rowling, O. Henry, Isaac Asimov, V. Nabokov, Sylvia Plath, Søren Kierkegaard, Lemony Snicket, Woody Allen, and many, many more.

The assumption should be that the name on a book or play is a pen name, unless there is some direct proof that it's not a pen name. As such, there's no direct proof that William Shakspeare wrote the poems and plays of William Shake-speare. For example, we have no evidence that William Shakspeare could even write; we have no evidence in his will that he wrote poems/plays; and we have virtually no evidence from his original gravestone monument that he wrote plays/poems. (See image below.)


Saturday, June 21, 2014

Comparison of the Wealth of Nations: The 2014 Update

Yup, you guessed it. It's that time of year again. BP just released their latest updates for the production and consumption of energy throughout the world. Before I get into the details of the analysis, I want to point out that there is one major change in my analysis compared with previous analyses that I've posted on this site. (These links go to the previous posts in 2011, in 2012, and in 2013 on the Wealth of Nations.)

The one change that I've made is that I've included a new form of useful work: coal consumption for non-power-plant applications. In the developed world, ~90% of all coal is consumed in power plants. However, in places like China, the consumption of coal in power plants is only ~60% of total coal consumption. Therefore, in this update to the "Wealth of Nations" calculations, I've included a new term that takes 10% of the coal consumed for developed countries and 35% of the coal consumed for developing countries. This number is then multiplied by 10% to reflect the fact that the enthalpy content in the coal is typically only being converted into low-grade energy, whose exergy is only 10% of its enthalpy content. This is similar to the existing term I have for non-power-plant consumption of natural gas (i.e. NG for home heating.) The main result of this additional term is that the useful work generation has increased in China by 14%, in India by 10%, and in Russia by 3% compared with the useful work generation if this term were not included. If this term is included, then China's useful work generation has been greater than the US's useful work generation since 2011. In other words, China has actually had the world's largest economy since ~2011.
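For concreteness, here is a minimal sketch of how this new non-power-plant coal term can be computed. The 10%/35% country splits and the 10% exergy factor come from the description above; the example consumption figure is a made-up placeholder, not a number from the BP dataset.

```python
# Rough sketch of the new "non-power-plant coal" term described above.
# Only the 10% / 35% splits and the 10% exergy factor come from the text;
# the example consumption number is a placeholder.

NON_PP_COAL_FRACTION = {"developed": 0.10, "developing": 0.35}
EXERGY_FRACTION = 0.10  # low-grade heat: exergy is ~10% of the coal's enthalpy

def non_power_plant_useful_work(coal_twh, status):
    """Useful work [TW-hr] credited to coal burned outside of power plants."""
    return coal_twh * NON_PP_COAL_FRACTION[status] * EXERGY_FRACTION

# Hypothetical example: a developing country consuming 20,000 TW-hr of coal
# (enthalpy basis) per year.
print(non_power_plant_useful_work(20_000, "developing"))  # ~700 TW-hr
```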

Here are some other conclusions before I get into a detailed breakdown of the analysis for this year.

(1) The US economy (as measured in [TW-hrs] of useful electrical and mechanical work produced) increased by 1.8% in 2013 compared with 2012. This is much better than the 1.5% decrease in useful work output between 2011 and 2012.
(2) There were two countries with negative growth rates between 2012 and 2013: Japan (-2.0%) and the UK (-1.5%.) And there were two countries with near-zero growth rates: Germany (0.3%) and Russia (0.2%.) The major countries with the highest growth rates were: China (1.8%), India (4.4%), and Brazil (3.7%.)
(3) The purchasing power parity GDP (i.e. PPP GDP) is a pretty good reflection of the wealth of a country, i.e. the capability to do mechanical and electrical work, when comparing developed economies (such as Germany, Japan, the USA, and the UK.) However, the calculation of the GDP appears to be biased against a few countries, especially Canada and Russia, but also China and Brazil. I can understand why the IMF would be biased against Russia (i.e. black markets and collective farming likely aren't being accurately reflected in the GDP calculation), but I still have no clue why the IMF and other world organizations consistently underestimate the size of Canada's economy. If I were a Canadian representative to the IMF, I would voice my concern that the IMF is underestimating the size of the Canadian economy by at least twofold.


So, now I'm going to present a more detailed breakdown of the analysis and present the data in graphical form. 

Wednesday, June 4, 2014

US CO2 Emission Reductions: A Good Start, but Much More is Needed Globally

As I've mentioned in a previous post, global emissions of CO2 are a major problem because the people who will be harmed the most by the higher temperatures and lower ocean pH are not those who are emitting the most CO2.
Before getting to the main points of this post, I'm going to summarize the main points from that previous post (i.e. why CO2 emissions are a problem.) The reason I'm summarizing this is that I still have many family members who get their news from Fox News, and hence think that CO2 emissions are a good thing.    ;-{

(1) There is a clear link between CO2 levels in the atmosphere and fossil fuel combustion (due to the decrease in oxygen at the same time that CO2 is increasing and the change in the isotope ratios of carbon 13 to carbon 12 in the atmosphere.)
(2) There is a clear link between CO2 levels in the atmosphere and lower pH levels in the ocean (more CO2 means more acidic oceans, which in turn can lead to coral bleaching.)
(3) There is a clear link between CO2 levels in the atmosphere and less IR radiation leaving the atmosphere at the IR frequencies at which CO2 absorbs.
(4) Since there is a partial overlap between the absorption frequencies for CO2 and H2O, the addition of CO2 into the atmosphere will have a greater effect on temperature in those locations where there is less water vapor.  (i.e. CO2 is fairly well mixed in the atmosphere, but water vapor concentration is highly dependent on local temperatures and relative humidity.)
(5) Predictions of models match well with experimental data. (meaning that temperatures are increasing the most in those locations where there wasn't much water vapor to start...i.e. the poles, deserts, and most other places in winter at night.)
Climate Model and Temperature Change
(6) All other possible causes of global warming have been debunked. (i.e. it's not the sun, it's not volcanoes, and it's not natural fluctuations...i.e. Milankovitch cycles.)

If you want a more in-depth summary of the case for why we need to significantly reduce CO2 emissions, please read the following articles from the website Skeptical Science (which, if you're not familiar with it, is a website devoted to debunking climate skeptics.)

Tuesday, May 20, 2014

Spacetime Expansion (i.e. Dark Energy) is due to the production of Quantum-Degenerate Active Neutrinos

I'd like to summarize what I've been trying to put into words over the last few years at this site. This article is still in rough draft form, and I will likely be editing it over the next few weeks as I improve the main argument.

Dark energy is not actually energy. Dark energy is just the expansion of spacetime that occurs because matter (mostly keV sterile-neutrino dark matter) is slowly turning into active neutrinos, which are relativistic & quantum degenerate.

Assuming that the rest mass of the lightest active neutrino is 0.001-0.06 eV and that its temperature right now is ~2 Kelvin, its de Broglie wavelength is between roughly 0.3 mm and 2 mm.

Also, using estimates for the electron neutrino density of 60-200 per cubic cm, the average spacing between electron neutrinos is between 1.7 and 2.5 mm. These two numbers are extremely close to each other, which means that the lightest neutrino is quantum Fermi-degenerate, i.e. you can't pack more of them into a region than roughly one per de Broglie wavelength cubed. (Just as you can't pack more electrons into a metal than roughly one per de Broglie wavelength cubed without increasing their temperature.)
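Here is a minimal back-of-the-envelope sketch of the two numbers quoted above. It uses the non-relativistic thermal estimate lambda = h/sqrt(3*m*kB*T) (one common convention; other prefactor conventions shift the answer by factors of order one) and the rest masses, temperature, and number densities from the text; nothing else is taken from a measurement.

```python
import math

h = 6.626e-34         # Planck's constant [J s]
kB = 1.381e-23        # Boltzmann's constant [J/K]
eV_to_kg = 1.783e-36  # rest mass of 1 eV/c^2 in kg

def de_broglie_mm(mass_eV, T=2.0):
    """Thermal de Broglie wavelength, lambda ~ h / sqrt(3 m kB T), in mm."""
    m = mass_eV * eV_to_kg
    return h / math.sqrt(3.0 * m * kB * T) * 1e3

def spacing_mm(n_per_cm3):
    """Mean inter-particle spacing, n^(-1/3), converted from cm to mm."""
    return (1.0 / n_per_cm3) ** (1.0 / 3.0) * 10.0

print(de_broglie_mm(0.001), de_broglie_mm(0.06))  # ~1.7 mm and ~0.2 mm
print(spacing_mm(60), spacing_mm(200))            # ~2.6 mm and ~1.7 mm
```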

The pressure of a relativistic Fermi quantum gas is only a function of the number density of fermions. The pressure (in units of mass per volume) is proportional to Planck's constant divided by the speed of light, all multiplied by the number density to the 4/3rd power. By starting with the density of dark matter at the recombination time (z ~ 1100), and assuming that the number of light neutrinos that can be created from a heavy neutrino is equal to the ratio of their rest masses, I estimate a degeneracy pressure of 10^-30 grams per cubic cm. This number is surprisingly close to the current mass density of dark matter (~5*10^-30 grams per cubic cm) and is only a factor of 10 less than the required dark energy pressure of 10^-29 grams per cubic cm. This means that with a few (somewhat) minor tweaks to my calculations, I could derive the dark energy "pressure" from the equations of relativistic, quantum degenerate neutrinos. (Note: here's a link to an article saying that relativistic, quantum degenerate neutrinos can't be the source of dark energy because the pressure is way too low. However, in that article, they assume a rest mass of the neutrino of 0.55 eV and don't assume that neutrinos can be generated from dark matter. When you change the rest mass to ~0.01 eV and include other sources of neutrinos from the decay of dark matter into many light neutrinos, then the quantum degeneracy pressure of neutrinos is large enough to explain dark energy. To be clear, my argument rests on a still unproven statement: that a ~2 keV sterile neutrino can slowly convert into ~10^5 active neutrinos of ~0.02 eV rest mass. If this statement is true, then we can explain why neutrinos have mass, what dark matter is, and what dark energy is.)
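To make the scaling explicit, here is a sketch of the pressure formula described above for an ultra-relativistic, fully degenerate spin-1/2 Fermi gas, P = (hbar*c/4)*(3*pi^2)^(1/3)*n^(4/3), expressed as a mass density P/c^2. The number density passed in is an illustrative assumption (a few times 10^5 per cm^3 happens to reproduce the ~10^-30 g/cm^3 figure quoted above); it is not a measured value.

```python
import math

hbar = 1.055e-34  # reduced Planck's constant [J s]
c = 2.998e8       # speed of light [m/s]

def degeneracy_pressure_g_per_cm3(n_per_cm3):
    """
    Pressure of an ultra-relativistic, fully degenerate spin-1/2 Fermi gas,
        P = (hbar*c/4) * (3*pi^2)^(1/3) * n^(4/3),
    returned as a mass density P/c^2 in g/cm^3 (the units used in this post).
    """
    n = n_per_cm3 * 1e6  # convert to per m^3
    P = 0.25 * hbar * c * (3.0 * math.pi**2) ** (1.0 / 3.0) * n ** (4.0 / 3.0)
    return P / c**2 * 1e-3  # kg/m^3 -> g/cm^3

# Assumed (illustrative) light-neutrino number density of 5e5 per cm^3:
print(degeneracy_pressure_g_per_cm3(5e5))  # ~1e-30 g/cm^3
```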

So, if dark matter can slowly turn into active neutrinos over time, then dark energy might just be the quantum degeneracy pressure of relativistic, quantum degenerate neutrinos. I'll continue in the rest of this post to make this argument stronger.

During the Big Bang, there would have been a large number of active neutrinos produced, and then some more would be produced as sterile neutrinos (i.e. dark matter) slowly convert/oscillate into light active neutrinos. Spacetime expands as active neutrinos are generated because it can't be any smaller than the size required to keep the de Broglie wavelength cubed times the number density less than 1.

As seen in the image below, there are stringy areas and clumpy areas. It is entirely possible that the stringy areas are regions in which the dark matter is mostly light neutrinos (formed after recombination via the breakdown of heavier, sterile dark matter) and the clumpy areas (i.e. galaxies) are regions that mostly hold the keV sterile dark matter. What is keeping the whole universe from collapsing might be the quantum degeneracy pressure of the lightest active neutrino.



Monday, May 19, 2014

What is the curvature of spacetime?

This post was updated on June 30, 2014.
The astrophysics community is currently in a heated debate about the implications of the BICEP2 measurements of B-mode waves in the cosmic microwave background (CMB.)
[For non-experts, the CMB is the nearly spatially-uniform (isotropic) radiation that we receive in whichever direction we look. The radiation matches the radiation of a blackbody at a temperature of 2.726 +/- 0.0013 Kelvin. This radiation is nearly uniform, with only small fluctuations. This near uniformity can be contrasted with the extreme density fluctuations we see in matter. Before the temperature of the universe cooled to below 3000 K (~0.3 eV), the density of hydrogen and helium in the universe was rather uniform because the hydrogen and helium were ionized (i.e. a plasma), and in constant contact with the photons that today make up the CMB. Only after the temperature dropped below 3000 K could the helium and hydrogen decouple from the radiation, and clump together to form local dense spots (which eventually turned into galaxies, stars, and planets.) It appears that the dark matter was already much more lumpy than photons and non-dark-matter at this point in time, so when the non-dark-matter decoupled from the photons, it started to fall into the local gravitational wells caused by the lumpy dark matter. (Note that my guess for why dark matter did not become extremely clumpy is that dark matter has a rest mass of ~2-10 keV and is prevented from being clumpy by Fermi quantum degeneracy, just as neutrons are in neutron stars.)]
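For readers who want to check the temperature-to-energy conversions quoted above, here is a two-line sketch using Boltzmann's constant in eV/K; the annotations in the comments are simply my reading of the numbers in this paragraph.

```python
kB_eV_per_K = 8.617e-5  # Boltzmann's constant [eV/K]

for T in (2.726, 3000.0):
    print(f"T = {T:8.3f} K  ->  kB*T = {kB_eV_per_K * T:.2e} eV")
# 2.726 K -> ~2.3e-4 eV (today's CMB temperature)
# 3000 K  -> ~0.26 eV   (roughly the ~0.3 eV recombination scale quoted above)
```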

Thursday, May 8, 2014

Dynamic Simulation of the Universe

I noticed that the BBC ran an article this morning about a Dynamic Simulation of the Universe with Cold Dark Matter that was recently published in the journal Nature.

Galaxies
The simulation produces galaxies of different shapes and sizes that astronomers see in the real Universe

I think that simulations like this are fascinating and I encourage more people to run simulations like this.
However, I find it extremely odd that the researchers didn't state in the paper the mass of the dark matter particle used in their simulations. They just mention Cold Dark Matter. It seems odd that the reviewers at a journal as well-recognized as Nature would have let this paper be published without requesting that the authors provide the mass of the Dark Matter particle.
If anybody knows the mass of the Dark Matter particle they used in their simulations, please comment on this post and provide us with the value of its rest mass.
Thank you

Also, to those people who work in this field, I have a request:
Try running a simulation of the universe with all of the following properties: (1) a wrinkled surface on an expanding 4D sphere (i.e. General Relativity in 4D with non-isotropic mass density), (2) the radius of the 4D sphere expands only when there are time-irreversible collisions (i.e. collisions involving the weak nuclear force are the cause of the expansion of space-time, while GR tells how space-time is curved), and (3) the dark matter particle has a mass between 2-10 keV.

It seems to me that simulations that are missing (1), (2), and (3) above are missing major components required to actually simulate the evolution of the universe.

Wednesday, May 7, 2014

Center of Mass of the Universe: More thoughts on Symmetries and Conservation Laws

I was re-reading Feynman's "The Character of Physical Law" when I stumbled upon some interesting sentences. (Chapter 3, pg. 82, starting just above Figure 22.)
"In this way the conservation of angular momentum implies the conservation of momentum. This in turn implies something else, the conservation of another item which is so closely connected that I did not put it in the table. This is a principle about the centre of gravity...The point is that of all the stuff in the world, the centre of mass, the average of all the mass, is still right where it was before."

If you take a large collection of particles, the forces of interaction between the particles are not capable of changing the velocity of the center of mass. The position of the center of mass only changes if the center of mass was already moving with a certain velocity, V. This velocity, V, is unaffected by the forces of repulsion and attraction between the particles. If this velocity, V, were zero to start, then it would always stay that way, unless the particles were acted upon by particles that weren't included in the original set when calculating the center of mass. But what if we take the collection of particles to be all of the particles in the universe?
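Here is a small toy check of that claim: two particles that attract each other via an internal, equal-and-opposite force, integrated forward in time. The masses, spring-like force law, and time step are arbitrary illustrative choices (not anything from Feynman's text); the point is only that the center of mass stays put when the total initial momentum is zero.

```python
import numpy as np

m = np.array([1.0, 3.0])                 # two particle masses (arbitrary)
x = np.array([[0.0, 0.0], [1.0, 0.0]])   # initial positions
v = np.array([[0.0, 0.0], [0.0, 0.0]])   # zero total momentum to start
dt = 1e-3

def center_of_mass(x, m):
    return (m[:, None] * x).sum(axis=0) / m.sum()

for _ in range(10_000):
    r = x[1] - x[0]
    f = -5.0 * r                          # internal attractive force on particle 1
    a = np.array([-f / m[0], f / m[1]])   # Newton's third law: equal and opposite
    v += a * dt
    x += v * dt

print(center_of_mass(x, m))  # stays at the initial value (0.75, 0.0), up to roundoff
```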

In my understanding of our universe (which is different from that of many astrophysicists and physicists), the universe is a wrinkled, expanding surface of a 4D sphere, where the radius of the sphere is the time dimension. The center of mass of such a surface is the center of the sphere. The center of the 4D sphere is the location in space-time of the Big Bang (r=t=0.) So, while the surface may be wrinkled (due to local variations in the density of matter), the variations have to cancel exactly on average, so that the center of mass of the universe is still exactly at the center of the 4D sphere. If there were any fluctuations (i.e. fluctuations such that there is an increase in mass/energy that is not exactly balanced on the opposite side of the sphere), then the location of the center of mass would be offset from the center of the sphere.

One can hopefully see that this has major implications for quantum gravity as well as for R-type quantum theories that require a collapse of the wavefunction. The introduction of uncertainty into the location of massive objects might cause us to have to drop an important conservation law (i.e. the conservation of momentum...which leads to the conservation of the center of mass in specific circumstances.) For example, if the mass inside of a black hole were to randomly fluctuate, then this could cause the center of mass of the black hole to change, which would in turn cause the center of mass of the universe to go off center.

There are clearly some problems reconciling quantum mechanics with certain conservation laws, such as the conservation of momentum, because if the location of an electron in an atom were truly random (i.e. stochastic) before we measure it, then this could cause the center of mass of the universe to be ever so slightly off center (unless there were a perfectly symmetric fluctuation on the opposite side of the universe that kept the center of mass constant. Of course, such a symmetric fluctuation would imply that electrons on opposite sides of the universe can communicate with each other at speeds much greater than the speed of light.) This is one of the many reasons why I'm skeptical of introducing stochastic (i.e. probabilistic) processes into the laws of physics. If you introduce stochastic processes into nature, you run into potential problems of either (a) communication at infinite speed in order for the fluctuations to cancel out, or (b) throwing out the conservation of momentum...i.e. allowing the center of mass of the universe to move stochastically around the origin as particles randomly appear here and disappear there. (Note that U-type quantum mechanics is deterministic; it's only R-type quantum mechanics that is stochastic. For more discussion of U vs. R QM, see Chapter 22 of Penrose's The Road to Reality.)

Another interesting question is:  what is the total angular momentum of the universe? Is there an axis about which the 4D sphere is rotating?
I think that this question is similar in nature to the following questions: what is the total electrical charge of the universe? What is the total weak charge and total color charge of the universe? It is entirely possible that the answer to all of these questions is zero.

In the remainder of this post, I'll be summarizing the concepts discussed in Figure 14 in Chapter 3 of Feynman's "The Character of Physical Law." The ideas that Feynman discusses here can be found on any website (such as wiki) that covers Emmy Noether's theorem that continuous symmetries of the equations of motion imply conservation laws (and vice versa.)

Sunday, April 27, 2014

Similarities and Differences between the CKM & PMNS Matrices

In the Minimal Standard Model (MSM), neutrinos do not have mass. They do not have mass because the creators of the MSM assumed that there are no sterile neutrinos (i.e. right-handed neutrinos and left-handed anti-neutrinos.) By including sterile neutrinos, it's fairly easy to build a theory that predicts massive neutrinos and neutrino mixing. (This is not new...see neutrino minimal standard model, vMSM.) However, the vMSM has yet to be confirmed experimentally, and there are still some remaining questions that need to be answered in the vMSM, such as: what is the mass of the sterile neutrinos? Can the masses of the neutrinos be predicted in advance of experimental measurements?

Given that neutrinos mix into each other, scientists have indirectly shown that neutrinos have mass. (Other indirect evidence for neutrino mass is that neutrinos from the 1987 supernova explosion arrived later than photons from the explosion...and the less energetic neutrinos on average arrived later than the more energetic neutrinos...in line with the special theory of relativity.) So, it's now obvious that the MSM is not a valid theory of the universe (and this is not news.) But the MSM seems to do quite well at predicting the rates of interaction between quarks and leptons. So, we are looking into how to tweak the MSM to account for neutrino mixing, but without messing up the parts of the MSM that work extraordinarily well.

In this post, I want to discuss the similarities and the differences between the CKM and PMNS matrices because these are the two matrices (that we know of so far) in which there is CP-symmetry-violating physics (and hence T-symmetry-violating physics.) There is an eerily close resemblance between the CKM matrix and the PMNS matrix, but the eigenvalues of the matrices are not exactly the same. It is the PMNS matrix that shows the strongest CP-violation, but it is also the matrix that has the most uncertainty in its values. Hence, it is crucially important that we decrease the uncertainty in the values of the PMNS matrix by making more precise measurements of neutrinos. Only by decreasing the uncertainty will it be clear what the strength of the CP-violating term in the matrix is, and perhaps whether we need to include sterile neutrinos in the PMNS matrix.
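As a concrete reference point, here is a sketch of the standard (PDG-style) parametrization of the 3x3 PMNS matrix, showing exactly where the CP-violating phase delta enters; CP violation disappears when delta = 0 or pi. The angles in the example are rough, illustrative values close to current global fits (not precise measurements), and Majorana phases are ignored.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta_cp):
    """Standard 3x3 PMNS parametrization (no Majorana phases)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    e = np.exp(-1j * delta_cp)  # the CP-violating phase enters here
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * e], [0, 1, 0], [-s13 * np.conj(e), 0, c13]])
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

# Illustrative angles (radians), roughly theta12~34 deg, theta13~9 deg, theta23~45 deg:
U = pmns(0.59, 0.15, 0.79, delta_cp=1.5)
print(np.round(np.abs(U), 3))                   # magnitudes of the mixing elements
print(np.allclose(U @ U.conj().T, np.eye(3)))   # unitarity check -> True
```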



Thursday, April 24, 2014

News update on Sterile Neutrinos

I'm currently working on a post on how the CP-violating term in the PMNS matrix is indirect evidence both for massive neutrinos and for a sterile neutrino.
In the meantime, I wanted to let readers of this blog know that you can read the following paper that was just recently published by Physical Review Letters, in which the author shows that a 7 keV sterile neutrino is an extremely plausible candidate for dark matter. The paper can also be found on the arXiv website, where it was posted back on March 4th.
The main results are summarized in Figure 1 of the paper. The paper suggests that the recent observation of a 3.57 keV X-ray line can be explained by a 7.1 keV sterile neutrino with a mixing parameter of approximately 3*10^-11 (sin squared of two times the mixing angle, to be precise.) This leads to predictions of the temperature at which the sterile neutrinos were produced.
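The connection between the 7.1 keV mass and the 3.57 keV line is simple kinematics: in the hypothesized radiative decay of a sterile neutrino into a (nearly massless) active neutrino plus a photon, the photon carries away half of the sterile neutrino's rest-mass energy. A one-line check:

```python
# Radiative decay nu_sterile -> nu_active + photon (active neutrino ~massless):
# the photon energy is half of the sterile neutrino's rest-mass energy.
m_sterile_keV = 7.1
E_photon_keV = m_sterile_keV / 2.0
print(E_photon_keV)  # 3.55 keV, consistent with the reported ~3.57 keV line
```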

I think that this is certainly interesting research, and the model works much better than ColdDarkMatter models. However, there are still a lot of assumptions in the model, and I would be hesitant to make any firm statements about whether sterile neutrinos have been discovered. There is clearly no discovery yet because the research is nowhere near the "5 sigma" certainty that has been required to claim the discovery of new particles.

The reason that I mention this article is the hope that the NSF, NASA, and equivalent European agencies will devote more time and effort to searching for WarmDarkMatter and Sterile Neutrinos.
(Sterile neutrinos can explain why ordinary neutrinos have mass and are intriguing candidates for dark matter. I'm still a little shocked that so much time/money/effort has gone into looking for GeV ColdDarkMatter. Unfortunately, this is likely all just a side effect of the physics community's irrational devotion to supersymmetry and superstring theory. It's good to see that the public at large is finally recognizing the hype around supersymmetry, superstring theory, and ColdDarkMatter.)

Wednesday, April 16, 2014

Cold Dark Matter is an Oxymoron

(Note that this is a continuation of a previous post in which I point out that Heavy Dark Matter is an Oxymoron.)

Is anybody else tired of the science media jumping on every piece of evidence for Cold Dark Matter, and turning it into possible evidence for string theory, supersymmetry, and the multiverse? I wish that the science journalists at Scientific American and New Scientist thought critically about the physics news that they are reporting. How can a particle be heavier than a proton, but have no electric charge or strong nuclear 'charge'?
Cold Dark Matter is an oxymoron because the rest mass of a particle is related to its capability to interact with other particles and/or fields (especially the Higgs field.) Heavier particles have more interactions with other particles, whereas lighter particles have fewer interactions. Mass is proportional to the number and strength of the particle's interactions with other particles. Saying the words "Cold dark matter" is like saying the words "Skinny fat people." It just doesn't make sense because a particle can't be both heavy in mass but light in interactions. (Note that this is also why I think that supersymmetry and any supersymmetric string theories are silly...if your theory invents new particles that are really heavy, but hardly interact with anything...such as gravitinos or neutralinos...then please throw your theory away and start from scratch. You are missing the whole point...mass is related to the capability to interact. Note that the same goes for theories that predict 'sterile' neutrinos of GeV or TeV mass.)

But let's step back for a second, and ask the question: what are the implications of GeV dark matter?

In order to have GeV dark matter, you need to explain the following:
(1) Why is there no evidence for GeV dark matter particles in any of the particle collider experiments? Why haven't we seen any of these particles when we collide together matter/anti-matter pairs with TeV of energy?
(2) Why doesn't the GeV dark matter just clump together at the center of galaxies? The reason that we invented the concept of dark matter was to explain the higher-than-expected velocity of stars on the outer edge of galaxies (and the higher-than-expected velocity of entire galaxies rotating about each other.)
GeV cold dark matter would just clump together because there's nothing (except Fermi-Dirac statistics and perhaps the weak force) to keep the particles from clumping together into an extremely tight ball (see the sketch after this list for how weak that degeneracy effect is for heavy particles). The fact that the recent "evidence" for GeV dark matter is coming from GeV gamma ray emission only in the center of the galaxy is a tell-tale sign that it's not coming from dark matter, but rather that it's coming from objects with extreme temperatures.
(3) According to astrophysical observation, there's no spike in the density of dark matter in the center of galaxies. Dark matter is actually quite diffuse in galaxies, and even extends out past where there are no more stars. (See the image below from the new movie Dark Universe.) So, why would there be a spike in the GeV emission at the center of galaxies? (It's not due to dark matter collisions, or else it would be diffuse throughout the galaxy.)
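To put a rough number on the degeneracy argument in point (2), here is a sketch of a Tremaine-Gunn-style bound: the maximum mass density of a fully degenerate spin-1/2 fermion gas whose momenta are capped at roughly the particle mass times the velocity dispersion. The bound scales as m^4, so it matters for keV particles but is astronomically large (i.e. irrelevant) for GeV particles. The 100 km/s velocity dispersion is an illustrative assumption, not an observed value.

```python
import math

h = 6.626e-34          # Planck's constant [J s]
eV_to_kg = 1.783e-36   # rest mass of 1 eV/c^2 in kg

def max_degenerate_density(mass_eV, sigma_m_per_s, g=2):
    """
    Rough maximum mass density of a fully degenerate fermion gas,
        rho_max ~ (4*pi*g/3) * m^4 * sigma^3 / h^3,
    returned in g/cm^3. Note the very steep m^4 dependence on particle mass.
    """
    m = mass_eV * eV_to_kg
    rho = (4.0 * math.pi * g / 3.0) * m**4 * sigma_m_per_s**3 / h**3
    return rho * 1e-3  # kg/m^3 -> g/cm^3

sigma = 1e5  # assumed velocity dispersion of ~100 km/s
print(max_degenerate_density(2e3, sigma))  # ~1e-17 g/cm^3 for a 2 keV fermion
print(max_degenerate_density(1e9, sigma))  # ~1e6   g/cm^3 for a 1 GeV fermion
```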

Saturday, March 1, 2014

What is the cause of the Arrow of Time?

This is a dialogue between a Sophist and a Platonist. The topic of the dialogue is: What is the cause of the arrow of time?

The participants of this dialogue are: Socrates and Sean Carroll

Location:  This dialogue takes place in a coffee shop near the ocean in California

Socrates:  Sean, you seem to be saying that the laws of physics are all time reversible, but that the motion of particles can still be time asymmetric. If I understand your argument, then you are saying that we can tell past from future, at least right now, because the future will have higher entropy than the past. You seem to state that this is due to the fact that it is more probable for a system to be in a state of high entropy rather than low entropy.

Sean Carroll: That's right. You have stated my position correctly. The universe started in a state of low entropy and the entropy is gradually increasing. The most probable state of the universe in the future is for it to be in a higher state of entropy than the past. Though, if in the future the universe reaches complete equilibrium, then we will see small fluctuations about this maximum value of entropy. Well, that is of course if there is such a thing as maximum entropy, and there is also the caveat that there might not be a 'we' to measure the entropy that far in the future.

Socrates: You are saying that time will continue to increase even after we reach equilibrium. I think that I understand your position. Let me rephrase what I think that you're saying:  If the state of the universe were probabilistic, and if you were to look at the state of the universe, then most of the time it should be in a state associated with the highest entropy. Though, if the state of the universe were probabilistic, then it might be possible for the universe to be far-from-this-maximum-entropy state. But tell me, Sean, why is the universe in a state so-very-far-from-this-maximum-entropy state?

Sean Carroll: That's because the universe started with a very-low-entropy Big Bang. And the universe is still in the process of increasing its entropy. We are headed to a state of maximum entropy, but that is not for some time in the future, and perhaps, if the universe continues to expand, it might never happen. The entropy might just continue to increase as the universe expands.

Tuesday, February 18, 2014

7 keV sterile dark matter?

It's a good day when you wake up and see the U.S. medal in your favorite Winter Olympic sport (SBX), and you see a blog post at Resonaances with a good discussion about a topic of interest: dark matter.
The Resonaances blog post discusses a manuscript by Bulbul et al. recently uploaded to the arXiv about an X-ray emission line at ~3.5 keV that can't be attributed to known atomic spectra. The authors of the manuscript attribute the emission to sterile dark matter particles with a mass of ~7 keV. Though, it should be noted that there are other, less likely, explanations for the emission at 3.5 keV. The manuscript discusses some of the other possible explanations. As seen below in the graph at the Resonaances website, the emission line at 3.5 keV is consistent with other experiments, and is in a region of parameter space that has yet to be ruled out.

(Image from http://resonaances.blogspot.com/)


What I'd like to add to the discussion is that this value of the dark matter mass is very close to the 95% confidence window from computer simulations by Horiuchi et al., whose 95% confidence window was 6-10 keV in one set of data and 8-13 keV in a second set of data. (Shown below.)

While there's still a large amount of uncertainty about what is the cause of dark matter, it appears that there is starting to be some convergence between experiments and computational simulations. And I hope that the recently submitted manuscript by Bulbul et al. will convince NASA to fund more research into analyzing X-rays in the ~0.5 keV to ~5 keV range as possible signals of sterile neutrinos decaying into fertile neutrinos. Of course, the terms sterile and fertile neutrino are misnomers because sterile neutrinos aren't completely sterile (w.r.t. the weak nuclear force) or else they wouldn't be able to decay into normal neutrinos, and it should be pointed out that normal neutrinos, electrons, and quarks are not always fertile (w.r.t. the weak nuclear force) because as they zig-and-zag, they go between being fertile and sterile.

I also want to point out that it does seem intuitively strange that "mostly" sterile neutrinos are heavier than the "mostly" fertile, normal neutrinos. This seems to violate the trend that the fundamental particles with more mass also have more forces with which they can interact. Therefore, it's important to point out that there is still a lot of fundamental physics that we don't understand, even if it turns out that dark matter is ~7 keV sterile neutrinos.

Update: Here's a link to a paper by a separate group that also found a 3.5 keV signal in the X-ray spectra from two galaxies.

Wednesday, February 12, 2014

Evidence for Massive Neutrinos, which also Interact with the Earth

Just want to highlight the following research paper by physicists in the UK.

Massive neutrinos solve a cosmological conundrum


They estimate that the sum of the masses of neutrinos is 0.32 eV +/- 0.081 eV.
If you have access to APS journals, you can find their paper here.

It's unclear to me what the connection is between this group's findings and the 2-10 keV particle that seems to explain dark matter. So, I welcome feedback in the comments section.

I'd also like to highlight some recent research from Japan, showing that solar neutrinos interact with the Earth. In other words, as the solar neutrinos pass through the Earth, they can convert from one type of neutrino into another type faster than if they were travelling through a vacuum.
Pretty cool that, once again, predictions using the Standard Model were confirmed experimentally!

Wednesday, February 5, 2014

Recent Experimental Measurements of the Weak Nuclear Force: Implications for the Arrow of Time

I wanted to highlight some recent experiments conducted at the Jefferson Lab in Virginia. The group measured the interactions of electrons with quarks, and was able to measure the weak nuclear interaction between these particles with greater precision than any previous experiment. (I'll link to the journal article as soon as it is published.)

They quantified the breaking of the mirror (P) symmetry of the weak nuclear force. Though, it should be pointed out that this type of measurement is not new. It has been known for a long time that the weak nuclear force violates P, as well as T & CP symmetry.
My main point in highlighting this research is that this measurement was much more precise than previous measurements and that this measurement is in agreement with the Standard Model of physics (i.e. more data for the Standard Model and more data that reduces the likelihood that there is Beyond Standard Model physics at the <10 TeV scale.)
My secondary goal in highlighting this research is to highlight that the weak nuclear force is present in collisions between electrons and quarks, which means that it's present any time molecules collide with sufficient velocity. This in turn means that the weak nuclear force is most likely the cause of the arrow of time.

Notice that we never see an arrow of time when there are only bosons present or when fermions are interacting only via gravity, E&M, or the strong nuclear force.
(Try determining which way a movie is running for the following phenomena: superconductivity, superfluid helium, photons travelling in the vacuum of space, or planets orbiting a star.)
The arrow of time only exists when there are fermions interacting via the weak nuclear force.

As such, it's important for us to recognize that Boltzmann's assumption of molecular chaos is not required in order to obtain time-asymmetric equations of motion. You just need to include the weak nuclear force (which occurs only when Fermions collide with sufficient energy.)

I also wanted to let readers know that I'm working on a Socratic dialogue between a defender of Boltzmann's molecular chaos assumption and a defender of the theory that the weak nuclear force is the cause of the arrow of time. I'm hoping that, after reading this dialogue, one will be able to see the problems with assuming that the reason for the arrow of time is that there is molecular chaos (i.e. randomization of velocities after collisions.) This assumption is quite useful for most problems of engineering interest; however, it doesn't actually teach us the real cause of the arrow of time. (And therefore needs to be scrapped and replaced.)

The real cause of the (one and only) arrow of time is the one time-asymmetric term that shows up in the weak nuclear force. This means that the real way to determine rate-based coefficients (such as diffusivity, thermal conductivity, and electrical conductivity) is to include the weak nuclear force in computer simulations of molecular models. Assuming molecular chaos gets us pretty close to the right answer, but it's likely that there are some cases where we can do a better job of predicting transfer coefficients from first principles than by making Boltzmann's assumption of molecular chaos.

Friday, December 13, 2013

Partial Deregulation in Mexico's Energy Sector

I want to spread awareness of some breaking news today in Mexico.
The lower House of Representatives has passed a bill that allows foreign companies to own up to 50% of energy companies in Mexico. This includes oil, natural gas and electricity companies.
The bill still needs to pass in the Upper House of Representatives.
If you haven't read the news already, check out the following story by the LA Times.

I think that this bill is a step in the right direction. Government monopolies over the energy sector are never as effective as private companies, so I'm glad to see this partial deregulation of the oil/NG/electricity sector. However, it's only a partial step, and it doesn't do what's ultimately required for real positive change.

(To my knowledge) The bill doesn't give landowners back their mineral rights.
This has been one of the major problems in Mexico. The mineral rights are owned (and still will be owned) by the government.

The Current Law: Per the Federal Mexican Constitution, the Federal Mexican Government owns and holds all the mineral and petroleum resources located under the surface of the ground (In other words, the owner of land in Mexico only owns the surface thereof and any non-restricted treasure therein).

So, until the Mexican government gives the mineral rights back to the landowners, I'm slightly skeptical that we'll see huge increases in the production of oil & gas in Mexico.
The new law is a good first step, but it's only the beginning towards a free market.



(Side note: I'm completely in favor of capping CO2 emissions from combusting fossil fuels. The reason I want to see free energy markets is that I think that we'll be drilling for oil & gas long after we stop emitting CO2 into the atmosphere, because we still need oil & gas for making plastics. Also, we can capture and store any CO2 generated at power plants. So, you can be pro-oil & gas development and pro-capping-CO2-emissions. The two are not mutually exclusive.)

Sunday, November 24, 2013

Energy Currency vs. Bitcoin Currency vs. Fiat Currency vs. Gold Currency vs. Google Currency

Currency is a means to an end. The end is growth, happiness, knowledge, complexity, diversity, etc...
Currency is a means to those ends because it allows people to collectively trade goods, i.e. through the use of currency I don't need to trade my engineering services directly with farmers in order to eat breakfast, lunch, and dinner every day. I can trade my engineering services with companies, who pay me in $dollars, and somewhere down the line, farmers trade their food for $dollars.
This is the most important purpose of a currency: a respected Medium of Exchange.

The other major purpose of a currency is not so much a function of the currency as it is a function of the investor. The question is: can the currency be invested into companies and/or banks so that one's investment in terms of real goods increases with time? In other words, the money invested into companies and banks should not have rapid fluctuations and it should increase with time, i.e. the currency should be a stable Storage of Wealth. This means that all sorts of growing companies need to accept payment in the currency. This also means that the currency must be safe from theft and that you can purchase stocks/bonds/homes using the currency with near-zero transaction fees.

As of the end of 2013, Bitcoin appears to satisfy only one of the two purposes of a currency: Medium of Exchange. (For more details on Bitcoin, check out the following YouTube video. It's the best summary of Bitcoin I've seen so far.) As far as being a stable Storage of Wealth, Bitcoin has failed miserably...due to theft, price fluctuations, and arbitrary rules for increasing the number of Bitcoins in circulation. But this is not a fatal problem, as long as you realize that you shouldn't be holding onto Bitcoins, but rather you should be investing your savings into projects with real, positive rates of return on investment, such as stocks and bonds. Where are the stable and growing Bitcoin-friendly banks, stocks, and bonds?

I think that there are some novel aspects to Bitcoin as a currency, such as the innovative way of having all of the currency transactions recorded by the public and without a central organizing agency. I would like to see a currency like Bitcoin take off and become a global medium of exchange with near-zero friction (i.e. with near zero transaction fees.)

However, people who purchase Bitcoins should realize that there are some underlying problems with Bitcoin:
(1) There is currently an arbitrary limit to the number of Bitcoins. This expected limit is around 21 million Bitcoins. (See the graph below.) The problem is that it's not clear what the incentive will be to secure the transactions (i.e. to mine Bitcoins) if there are no more Bitcoins available to be generated.


(2) To continue along point 1, the amount of currency should track the growth in the capability to generate useful work. New currency should be generated when self-replicating power plants are built, and not due to some arbitrary limit and not when gold/silver are mined out of the ground. I'd like to see an alternative to Bitcoin in which new currency is generated only when new power plants are built and only when people democratically vote to allow more currency (i.e. not just when Ben Bernanke says so.)

(3) Right now, new Bitcoins are instead generated when transactions are processed (mined), but there is no connection between that process and the growth rate in useful work.

Wednesday, October 23, 2013

Highlights from the Last Few Weeks in Particle and Astro Physics News

It's been a roller-coaster month for many scientists. In the US, there was a government shutdown. And in the wider community, there have been a number of news articles on some interesting, but inconclusive, experimental findings. The goal of this post is to highlight the findings and give people links to the articles by Scientific American and New Scientist.

So, a list of some recent experimental findings:

(1) Dark Matter particles likely have a rest mass between 8 keV and 14 keV
Horiuchi et al. recently published a paper comparing experimental measurements with dark matter theory, which suggests that the rest mass of dark matter particles is somewhere between 8 keV and 14 keV. Only in this range of rest-mass values can the experimentally measured subhalo counts be predicted. (See Figure 2 and Table II from their paper below.) This appears to be strong experimental evidence against dark matter with rest-mass values of MeV or GeV. I look forward to seeing more data collection and analysis along these lines.



Monday, October 14, 2013

The Hype this week from the National Ignition Facility

A number of blogs have been critical of the hype this week on BBC News that nuclear fusion researchers at the National Ignition Facility reached a "milestone."

The NIF is an exciting research facility because it allows us to understand nature better and because it helps us understand the D-T fusion process used for military applications. However, I'm afraid that what we've learned this week is that the science media (once again) goes after whatever can generate hype.

So, let's be clear with what happened at NIF last month.

192 lasers generated photons that had 1.8 MJ of energy. (It should be pointed out that the NIF site consumed well more than 1.8 MJ of electricity to create the 1.8 MJ of photons. If this had been a semi-continuous event, the lasers would likely need at least 5 MJ to create the 1.8 MJ in photons.)

Of the 1.8 MJ in the photons, less than 14 kJ of energy reached the inside of the target as high-energy X-rays. And 14 kJ of neutrons were generated from the reaction. Assuming that the neutrons are used to run a Rankine cycle power plant (same as for nuclear fission), then roughly 5 kJ of electricity could be generated from the neutrons.

This means that the site spent ~5MJ of electricity to be able to perhaps obtain 5 kJ of electricity.

This means that NIF is three orders of magnitude away from "breakeven," and four orders of magnitude away from being "thermodynamically viable." This is far from a "milestone," and it is far away from what has already been achieved by the magnetically-confined fusion plasmas at JET.
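Here is the same back-of-the-envelope accounting in code, using only the rough figures quoted above (these are my readings of this post's numbers, not official NIF data):

```python
import math

electricity_in_kJ  = 5.0e3   # ~5 MJ of electricity to drive the lasers
laser_light_kJ     = 1.8e3   # 1.8 MJ of laser photons
xrays_on_target_kJ = 14.0    # <14 kJ reaches the inside of the target
electricity_out_kJ = 5.0     # ~5 kJ if the neutrons drove a Rankine cycle

for label, kJ in [("electricity in", electricity_in_kJ),
                  ("laser light", laser_light_kJ),
                  ("X-rays on target", xrays_on_target_kJ),
                  ("electricity back out", electricity_out_kJ)]:
    print(f"{label:22s} {kJ:10.1f} kJ")

ratio = electricity_in_kJ / electricity_out_kJ
print(f"in/out ~ {ratio:.0f}x, i.e. ~{math.log10(ratio):.0f} orders of magnitude from breakeven")
```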

I think that it's silly that NIF is trying to sell itself as an energy source of the future.

With that having been said, I want to point out that the idea of nuclear fusion is not a complete pipe dream. There is a possibly viable route to electricity production via magnetically-confined fusion plasmas, such as the still-being-built ITER experiment in Cadarache, France.

While this experiment is really expensive and there's still a chance that there's another plasma instability that will keep the system from reaching the real "breakeven" milestone (i.e. generating more potential electricity from the neutrons than the electricity consumed to heat the plasma), I am proud that this facility is getting funding from world governments, including the US. The research at ITER is ground-breaking, and magnetically-confined fusion plasma is a potential energy source in the future if we can figure out how to control a few more of the instabilities that have appeared over the last ~60 years of research in this field.

I'd like to end this post by detailing some more information on some of the main engineering breakthroughs required before magnetically-confined fusion plasma can become "engineering" viable.

List of engineering breakthroughs required for magnetically-confined fusion plasma
(Also see slide 4 of the following presentation. The required engineering 'feats' or breakthroughs are well known. The required feats are all likely achievable...just really damn hard and require lots of upfront capital to do the research.)
(1) Controlling any instabilities that occur through alpha-heating (i.e. there are likely to be instabilities due to the fact that the alpha particles emerge with energies on the order of 4 MeV, but the core temperature of the plasma may only be 100's of keV.) The ability to control potential instabilities in nuclear-fusion-powered plasmas will be tested at ITER.
(2) Not steady-state: Tokamak plasmas have a toroidal electric field that must be applied by a time-varying magnetic field. This means that the process is inherently not steady-state, because you eventually need to change the direction of the electric field as you reach the maximum magnetic field that can be generated. This means that the plasma needs to be turned off (likely on a weekly/monthly basis), and then the current needs to be restarted in the opposite direction. An engineering 'feat' is required here to design a system that doesn't break during these scheduled start-ups / shut-downs (or a breakthrough is required in steady-state plasmas) and that isn't cost-prohibitive. So far, the steady-state stellarator designs have been cost-prohibitive.
(3) The wall materials that can withstand a high flux of ions, electrons, photons, and neutrons still need to be tested and proven to work. Also, the process for generating tritium from lithium needs to be demonstrated on a continuous basis. (Note: there are plans to do this testing. I'm just pointing out that this has yet to be demonstrated.)
(4) There are also a number of challenges associated with making cheap, super-conducting, high field magnets, with fueling the plasma, with removing heat, and with designing wall materials to withstand instabilities that release large amounts of energy to the wall while not releasing material from the wall that can end up cooling off the core of the plasma.

My overall conclusion (i.e. educated guess) is that magnetically-confined fusion plasma may be engineering-feasible sometime in the next 50 years, but it may not be economically competitive in the next 100 years. There's just too much uncertainty to know if magnetically-confined fusion will ever be economically viable against other sources of energy.

What can be stated with 99% certainty is that inertially-confined, laser-driven fusion is nowhere close to being engineering-viable or economically-viable. As a taxpayer in the US, I'd like to be able to vote for where my taxes go. I would be willing to vote for magnetically-confined fusion plasma research, but I would not vote for my tax dollars to go to inertially-confined fusion research.

Sunday, October 6, 2013

A summary of why we need to globally reduce the emission of carbon dioxide into the atmosphere

What do coral reefs off the coast of Australia, computer chip factories in Thailand, ski & snowboarding resorts on the US east coast, and islands in the South Pacific all have in common? The answer is that all of these places are already feeling the negative impact of human-induced increases in the concentration of CO2 in the atmosphere.
The goal of this post is to explain the science behind the effects of higher CO2 levels in the atmosphere, such as global warming, ocean acidification, and sea level rise. My hope is to explain the effects of higher CO2 concentrations in the atmosphere in a somewhat less technical manner than the recent publication by the IPCC. There's nothing wrong with how the IPCC presents this information; it's just that I think it helps for the information to be presented through the eyes of somebody who has no connection to the people who wrote the report or the papers cited in the report.
Unfortunately, the topic of CO2 emissions has become so politicized that the actual facts are easily swept under the rug of political ideology. Part of the problem is that environmental groups rarely discuss the actual science (and are quick to bash people who aren't alarmists), and the other part of the problem is clearly that there are people who refuse to accept that humans can affect the global climate, the ocean pH, or the sea level.  I consider myself a fairly moderate person and my goal here is to tell it as it is, regardless of how difficult it may or may not be to solve the problem of preventing major changes to Earth's climate, to Earth's average sea/ocean level, and Earth's average pH level in the seas/oceans.
So, before I get into the science, I'd like to state simply what the actual problem is that we face:
The problem: Our global society is on pace to cause the temperature in the Arctic and Antarctic to rise to the point at which we will likely see at least a 3 meter increase in sea levels. In addition, the higher concentration of CO2 in the atmosphere will cause lower pH levels in the ocean, which is harmful to major shell-forming species, such as coral reefs. These are the straightforward and indisputable effects of higher concentrations of CO2 in the atmosphere. There are also a number of other effects, of varying levels of certainty.

The Solution: The only realistic way to prevent major climate change, sea level change, and pH change is to globally limit the emission of CO2 into the atmosphere. We can't "geo-engineer" our way out of this problem by throwing particulates into the atmosphere to scatter sunlight away from the surface, because this "solution" doesn't address the fact that the pH of the ocean will continue to decrease if we continue to emit large amounts of CO2 into the atmosphere.