Sunday, June 7, 2015

Constraints on Dark Matter & Dark Energy from the Hubble Expansion between z=0 and z~3

(Note: Updated on 6/19 to include Planck 2015 Best Fit for comparison)
Using measurements of the Hubble expansion rate as a function of time, this week I'm showing how data on the Hubble expansion between z=0 and z~3 alone can provide some tight constraints on the amount of dark energy and dark matter in the universe.
First, I'd like to present the experimental data (without any fits), and then I'll show the same data along with a "best fit equation" and with Planck's recent estimates. The z=0 to z=1.3 data shown below were taken from Heavens et al. 2014 (which has references to where the original data were collected.) The data point at z=2.34 is from baryon acoustic oscillations (BAO) in the Lyman-Alpha forest measured by the BOSS collaboration (Busca et al. 2013).


The figure above is a plot of the expansion rate of the Universe, H(z), as a function of time in the past. On the y-axis I've plotted the Hubble expansion rate normalized by the Hubble constant today, and then squared. The x-axis is the inverse scale factor of the universe: a value of 4 means that linear dimensions in the universe were 4 times smaller than today. Also note that this is a log-log plot so that the data points near (1,1) are not scrunched together. One thing to note about the data is that there is a definite change in slope between the data near z=0 and the data at z>1.

Next, we'll discuss the theory behind why the Hubble expansion rate changes with time. We'll focus here on the case in which the total energy density of the universe is equal to the critical density (Ω_total = 1). In that case, we can use Equation 2.18 of the Physical Cosmology Class Notes by Tom Theuns to determine how the Hubble expansion rate changes with time. This equation is listed below:
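(For reference, the equation in question is the standard Friedmann equation, written here in terms of today's density fractions in dark energy, curvature, matter, and radiation:

H(a)^2 / H_0^2 = \Omega_\Lambda + \Omega_k (a_0/a)^2 + \Omega_m (a_0/a)^3 + \Omega_r (a_0/a)^4 )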

As seen above, if the only form of energy density in the universe were dark energy (Λ), then the Hubble expansion rate would be a constant. The data above are clearly not consistent with the case of only dark energy (i.e. a cosmological constant.) The other terms in the equation are: curvature (k), matter (m), and radiation (r). Radiation is defined as particles whose kinetic energy is greater than their rest mass energy; matter is defined as particles whose kinetic energy is much less than their rest mass energy; and the curvature term accounts for the spatial curvature of the universe.
If we were in a universe with only radiation, then the square of the Hubble rate would decrease as (a0/a)^4, which is the same way that the radiation energy density decreases as the universe expands. If we were in a universe with only matter, then the square of the Hubble rate would decrease as (a0/a)^3, which is the same way that the matter energy density decreases as the universe expands.

Next, I want to show that the best fit through the experimental data is a universe with approximately 30% matter and 70% dark energy (today.) I wanted to see what would be the best fit through this data alone (ignoring all other data...which also points to ~30% matter and 70% dark energy.) So, in Excel, I created a quartic polynomial with 5 free variables (a + b·x + c·x^2 + d·x^3 + e·x^4, where x = a0/a), constrained the 5 free variables to sum to a value of 1 (i.e. to constrain the total energy density to be equal to the critical density), and also constrained the free variables to be greater than zero. In this case, the best fit through the data was (0.717, 0, 0, 0.283, 0). These values are pretty close to the values determined using Planck+BAO data. Interestingly, there is no sign of energy density that would scale linearly, quadratically, or quartically with (a0/a). This means that the best fit through the data is a universe with only matter and dark energy.
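For anyone who wants to reproduce this kind of constrained fit outside of Excel, here is a minimal sketch in Python. The data arrays below are synthetic stand-ins generated from a 30%/70% universe (the real values come from Heavens et al. 2014 and Busca et al. 2013), and all variable names are my own.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in data: x = a0/a = 1 + z, and (H/H0)^2 generated from a
# 30% matter / 70% dark-energy universe with a little noise added.
# Replace these with the measured H(z) values from Heavens et al. 2014
# and the z=2.34 BAO point from Busca et al. 2013.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 3.4, 15)
h2_obs = 0.7 + 0.3 * x**3
h2_obs *= 1 + 0.02 * rng.standard_normal(x.size)

def model(w, x):
    # w = (a, b, c, d, e): coefficients of 1, x, x^2, x^3, x^4
    return sum(w_i * x**i for i, w_i in enumerate(w))

def chi2(w):
    return np.sum((model(w, x) - h2_obs) ** 2)

result = minimize(
    chi2, np.full(5, 0.2), method="SLSQP",
    bounds=[(0.0, 1.0)] * 5,                                          # each term >= 0
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],   # flat universe
)
print("best-fit (Lambda, ., ., matter, .):", np.round(result.x, 3))
```

With real data in place of the synthetic arrays, this is the same optimization the post describes: a non-negative quartic in (a0/a) whose coefficients sum to 1.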


Thursday, May 28, 2015

Update to Post on Neutrino Mixing: Visualizing CP violation

In this post, I'll be updating a graph I made last year in a post on the PMNS matrix.
The reason for the update is that there was a recent announcement by the T2K research group of a measurement of anti-muon-neutrinos converting into other anti-neutrino species. I'd like to first show a plot from their recent presentation in which they show the uncertainty in both the 2-3 mixing angle and the 2-3 mass difference.
As can be seen in their figure on Slide 48, the data is entirely consistent with the 2-3 mixing angle being the same for neutrinos as for anti-neutrinos. This is a good sign that the mixing angles for anti-neutrinos are the same as for neutrinos; but note that the data is also compatible with there being a CP-violating phase. In fact, the best fit to T2K data plus Particle Data Group 2014 data yields a value of the CP-violating phase that is close to -90 degrees. What's interesting with these new estimates for the 4 parameters of the PMNS matrix is that (a) the sum of the three mixing angles and δCP is close to zero (within error) and (b) the sum of the three mixing angles is close to 90 degrees (within error.)



T2K 2015 summary
Angle   Radians   Degrees   Sin²(θ)   Sin²(2θ)
θ12      0.584     33.5      0.304     0.846
θ13      0.158      9.1      0.0248    0.097
θ23      0.812     46.5      0.527     0.997
δCP     -1.55     -88.8
SUM      0.00       0.0

The PMNS matrix corresponding to these values can be visualized using Wolfram Alpha by clicking on the site below:

The 1-sigma uncertainty on the δCP term is approximately -90 +/- 90 degrees. This means that there is a good chance that the value of δCP is between -180 and 0 degrees, which in turn means that there is a really good chance that exp(i·δCP) is non-real, which means that there can be CP violation due to neutrino mixing.

So, using the data above, I've re-evaluated the eigenvalues and determinant of the PMNS matrix. I think that looking at the eigenvalues is a better way of viewing this data than listing the 4 parameters, because one can visualize how close the eigenvalues are to the unit circle. If the eigenvalues fall on the unit circle, then the neutrino mixing is complete (i.e. the mixing is only into these three states.) If the eigenvalues all fall outside the unit circle, then there is growth in the total number of neutrinos, and if the eigenvalues all fall within the unit circle, then there is decay in the total number of neutrinos.
As seen below, the eigenvalues fall very close to the unit circle (especially when including uncertainty, which is not shown in the figure below; the size of the markers does not correspond to the uncertainty in the eigenvalues.) Using the previous data, the eigenvalues fall nearly exactly on the unit circle, whereas using the T2K2015+PDG2014 data, the eigenvalues fall slightly off of the unit circle (though within the 1-sigma uncertainty.) Interestingly, one of the eigenvalues is extremely close to 1+0i. The other two eigenvalues are close to the unit circle, but far away from 1+0i. The other thing to point out is that the two eigenvalues far from 1+0i are not mirror images of each other across the real axis. The fact that they are not mirror images is a sign that there is CP violation in the PMNS matrix: if there were no CP violation, one of these eigenvalues would be the complex conjugate of the other, a +/- bi. Finally, the determinant obtained using the new data is nearly entirely real-valued and slightly less than 1 (Det = 0.9505 + 0.001i). This is likely a sign that the values of the 4 parameters were chosen by T2K in a way that is not self-consistent, but there is also still the possibility that the non-unitary value of the determinant is due to another type of neutrino that mixes with the three main species. (This is just speculation because, as can be seen from the old wiki data, the eigenvalues fall very close to the unit circle when the parameters are chosen consistently.)
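For anyone who wants to reproduce the eigenvalue picture, here is a minimal sketch in Python using the standard PDG parameterization of the PMNS matrix and the angles from the T2K 2015 table above. One caveat: this standard construction is exactly unitary for any choice of angles (its determinant has modulus 1), so a determinant like 0.9505 can only come from assembling the matrix in some other way.

```python
import numpy as np

# T2K 2015 + PDG 2014 values from the table above (radians)
t12, t13, t23, dcp = 0.584, 0.158, 0.812, -1.55

s12, c12 = np.sin(t12), np.cos(t12)
s13, c13 = np.sin(t13), np.cos(t13)
s23, c23 = np.sin(t23), np.cos(t23)
e = np.exp(1j * dcp)

# Standard (PDG) parameterization: U = R23 * U13(delta) * R12
U = np.array([
    [c12*c13,                      s12*c13,                     s13*np.conj(e)],
    [-s12*c23 - c12*s23*s13*e,     c12*c23 - s12*s23*s13*e,     s23*c13],
    [s12*s23 - c12*c23*s13*e,      -c12*s23 - s12*c23*s13*e,    c23*c13],
])

eigvals = np.linalg.eigvals(U)
print("eigenvalues:   ", np.round(eigvals, 4))
print("|eigenvalues|: ", np.round(np.abs(eigvals), 4))  # all 1 for a unitary matrix
print("determinant:   ", np.round(np.linalg.det(U), 4)) # modulus 1 by construction
```

Plotting the three eigenvalues in the complex plane against the unit circle reproduces the kind of figure described above.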


Old Wiki Data
Angle   Radians
θ12      0.587
θ13      0.156
θ23      0.670
δCP     -2.89
SUM     -1.48

Tuesday, May 12, 2015

Summary of the Case of ~ 7keV Sterile Neutrinos as Dark Matter

A resonantly-produced (RP) sterile-neutrino minimal extension to the SM is entirely consistent with all known particle physics and astrophysics data sets. Such an SM extension means that there could be only a small number of adjustments required to the Standard Model of Particle Physics (SM) in order for the SM to explain all astrophysics data sets, i.e. to explain dark matter, dark energy and inflation. Here is what such an RP-sterile-neutrino SM extension could look like:

(1) There are no new particle-classes to be discovered (other than nailing down the mass of the light, active neutrinos and the heavier, sterile neutrinos)

(2) Not counting spin degeneracy: There were likely 24 massless particles before the electro-weak transition. After this transition, some of the particles acquire mass, and the symmetry is broken into:
1 photon, 3 gauge bosons (W+ / W- / Z ) and 8 gluons   (12 Integer spin bosons in total)  
6 quarks, 6 leptons, i.e. electron / neutrinos   (12 Non-integer spin fermions in total)

Note that 24 is the number of symmetry operators in the permutation symmetry group S(4) and the classes within the S(4) symmetry group have sizes 1,3,8 (even permutations) and 6,6 (odd permutations.)  This is likely not a coincidence. Given that there are 4 (known) forces of nature and 4 (known) dimensions of spacetime, the S(4) symmetry group is likely to appear in nature. (See Hagedorn et al. 2006)

(3) Higgs scalar field is the inflaton field required to produce a universe with: (a) near zero curvature (i.e. flat after inflation), (b) Gaussian primordial fluctuations, (c) scalar tilt of ~0.965 for the fluctuations, (d) a near-zero running of the scalar tilt, (e) small, but near zero tensor fluctuations, and (f) no monopoles/knots.

(4) The sum of the squared rest masses of the SM bosons is equal to the sum of the squared rest masses of the SM fermions, and the sum of these two is equal to the rest-mass-squared equivalent energy of the Higgs field. In other words, during the electro-weak transition in which some particles acquire mass via the Higgs mechanism, half of the rest-mass-squared energy goes towards bosons (H, W, Z) and half goes to fermions (e, ve, u, d, etc...). If this is the case, then there are constraints on the mass of any sterile neutrinos. In order to not affect this "sum of rest mass squared" calculation, the rest mass of any sterile neutrino must be less than ~10 GeV. A keV sterile neutrino would have no effect on this "sum of rest mass squared" calculation.
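As a quick back-of-the-envelope check of this claim (my own sketch, using approximate PDG masses in GeV), the squared masses of the H, W, and Z do come out close to the squared mass of the top quark, which dominates the fermion side, and the two sides together land near v^2 = (246 GeV)^2:

```python
# Approximate pole masses in GeV (PDG-ish values)
m_H, m_W, m_Z = 125.25, 80.38, 91.19
m_top = 172.76          # dominates the sum over fermion masses squared
v = 246.22              # Higgs vacuum expectation value

bosons = m_H**2 + m_W**2 + m_Z**2
fermions = m_top**2     # remaining fermions contribute well under 1% of this
print("sum of boson masses^2  :", round(bosons))      # ~30,500 GeV^2
print("sum of fermion masses^2:", round(fermions))    # ~29,800 GeV^2
print("total / v^2            :", round((bosons + fermions) / v**2, 3))  # ~0.99
```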

So, it is entirely possible that there is no new physics (outside of the neutrino sector), provided that (a) the Higgs scalar field is the inflaton field, (b) sterile neutrinos are the dark matter particles, and (c) light active neutrinos are the cause of what we call dark energy. In the rest of this post, I summarize the case for ~7 keV resonantly-produced, sterile neutrinos as the main dark matter candidate. Note: some of these points, related to (b), can be found in the following papers by de Vega 2014 and Popa et al 2015.

Tuesday, May 5, 2015

Mysterious Cold Spot in the CMB: Still a mystery

Summary: A research group has recently suggested that a supervoid can explain the Cold Spot in the CMB. The problem is that a supervoid (via the ISW effect) can't explain the actual Planck TT data.

There has been a lot of attention over the last decade on a particularly large Cold Spot in the CMB, as seen both by WMAP and Planck (image from this article.) Though, the Cold Spot is somewhat hard to see in the Planck data without a circle around it because there are so many "large-scale cold spots." The mystery behind the famed Cold Spot in the CMB is that the cold region is surrounded by a relatively hot region, and there is a difference of ~70 µK between the core of the cold spot and the surrounding region. Typical variations between regions of this size are only ~18 µK.


The two images directly above are from the Planck 2015 results. The top figure is the polarization data, and the bottom is the temperature data. Note that the scale goes from -300 µK to +300 µK.

While this new finding of a massive supervoid of galaxies in the region near the Cold Spot is interesting, it should be (and has already been) pointed out that such a supervoid can't explain the ∼ -100 µK cold spot in the CMB via the standard ISW effect. As stated in the article "Can a supervoid explain the Cold Spot?" by Nadathur et al., a supervoid is always disfavoured as an explanation compared with a random statistical fluctuation on the last scattering surface. There's just not enough of a void to explain the Cold Spot, because the temperature would only be ∼ -20 µK below the average temperature due to the late-time integrated Sachs-Wolfe (ISW) effect. Nadathur et al. state, "We have further shown that in order to produce ∆T ∼ −150 µK as seen at the Cold Spot location a void would need to be so large and so empty that within the standard ΛCDM framework the probability of its existence is essentially zero." The main argument against the supervoid-only explanation can be seen in the figure by Seth Nadathur on his blog post regarding the paper he first-authored on this topic.

Wednesday, April 15, 2015

Repulsive keV Dark Matter

The case for 2-10 keV mass dark matter has gotten a lot stronger in 2015.
First, as mentioned in the previous post, the Planck 2015 results significantly lowered the value of the optical depth for photon-electron scattering during reionization and significantly lowered the z-value at which reionization occurred. Effectively, this pushes back the time at which the first stars and galaxies formed, and therefore indirectly suggests that dark matter took longer to clump together than predicted by GeV cold dark matter theories. As can be seen in the last figure in that previous post, a lower value of optical depth is possible for thermal relics with rest masses of ~2-3 keV and is incompatible (at 1-2 sigma) with CDM theories.

Second, just today, it was announced that there is a good chance that dark matter is actually self-repulsive. (Of course, it's been known indirectly for a while that dark matter is self-repulsive, because there is a missing core of dark matter in the center of galaxies...which can be explained by fermion repulsion between identical particles.) The news today is that there appears to be repulsion between dark matter halos in a 'slow collision.' This should be contrasted with the lack of repulsion when two dark matter halos collide in a 'fast collision' (such as in the Bullet Cluster).

So how do we reconcile all of this information? Actually, the answer is quite simple.
Dark matter halos are made of Fermi particles of keV rest mass that are quantum degenerate when their density is high and non-degenerate when their density is low.

When two Fermi-degenerate halos of uncharged particles collide with velocities much greater than their Fermi velocity, the clusters of particles pass right through each other. Pfenniger & Muccione calculated what would happen in collisions between Fermi particles (or, equivalently, between two degenerate halos...it doesn't matter, provided that we are talking about a degenerate halo of particles or a single particle.)
To quote Pfenniger & Muccione: "An interesting aspect developed for example by Huang (1964, chap. 10), is that the first quantum correction to a classical perfect gas... caused purely by the bosonic or fermionic nature of the particles is mathematically equivalent to a particle-particle interaction potential:

φ(r) = −kT · ln[1 ± exp(−2π r² / λ_dB²)]."
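Here is a small sketch (my own, simply evaluating the formula as quoted above) of what this effective statistical potential looks like: the minus sign (fermions) gives repulsion, the plus sign (bosons) gives attraction.

```python
import numpy as np

def phi_over_kT(r_over_lambda, fermions=True):
    """Huang's effective statistical potential phi(r)/kT for an ideal quantum gas.
    The '-' sign (fermions) is repulsive; the '+' sign (bosons) is attractive."""
    x = np.exp(-2 * np.pi * r_over_lambda**2)
    return -np.log(1 - x) if fermions else -np.log(1 + x)

for r in (0.1, 0.3, 0.5, 1.0):
    print(f"r/lambda_dB = {r:.1f}:  fermion phi/kT = {phi_over_kT(r):+.3f},"
          f"  boson phi/kT = {phi_over_kT(r, fermions=False):+.3f}")
```

The fermion potential diverges as r goes to zero (two identical fermions can't sit on top of each other) and dies off once the separation exceeds the de Broglie wavelength, which is exactly the behavior invoked in the halo-collision argument below.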

When going from a two-particle collision to a collision between two halos, the main difference is that the de Broglie wavelength of the particle would be replaced by the effective degeneracy radius of the halo.
When the directed velocity is large compared with the thermal velocity of the cluster, the Fermi clusters pass right through each other. The center of mass moves nearly the same as if these were classical particles. In other words, there would be no separation between the center of the dark matter mass and the center of the solar matter mass.


In the next case below, the directed velocity of the two particles (or clusters) is decreased 3-fold. In this case, there is some slight repulsion between the clusters, and there would be a slight separation between the center of the dark matter mass and the center of the non-dark-matter mass, because the solar matter will pass through unaffected by the DM collision (unless there were actually a solar-solar collision...however unlikely.)



Finally, in the last case, the directed velocity of the two particles (or clusters) is decreased a further 3-fold. In this case, the particles have the time to interact, and can actually gravitationally coalesce and entangle.


This means that we should expect the "cross section of interaction" to depend greatly on how quickly the dark matter clusters are colliding.

Monday, March 23, 2015

Concordance Cosmology? Not yet

The term "Concordance Cosmology" gets thrown around a lot in the field of cosmology. So too does the term "Precision Cosmology."
However, I'm a little hesitant to use these terms when we don't know what 95% of the matter/energy in the universe is. Cosmologists use the term "Precision Cosmology" to describe the fact that they can use a number of data sets to constrain variables such as the rest mass of neutrinos, the spacetime curvature of the universe, or the number of neutrino species. However, many of these constraints are only valid when assuming a certain, rather ad hoc model.

In many respects, this Standard Model of Cosmology, i.e. Lambda CDM, is a great starting point, and most people who use it as a starting point are fully aware of its weaknesses and eagerly await being able to find corrections to the model. The problem is that it's sometimes referred to as if it were one complete, consistent model (or referred to as a complete model once there's this small tweak over here or over there.) However, LCDM is not consistent and is rather ad hoc. The goal of this post is to poke holes in the idea that there is a "Standard Model of Cosmology" in the same sense that there's a "Standard Model of Particle Physics." (Note that the SM of particle physics is much closer to being a standard model...with the big exception being the lack of understanding of neutrino physics, i.e. how heavy are neutrinos and is there CP violation in the neutrino sector?)

So, let's begin with the issues with the Standard Model of Cosmology:  i.e. Lambda CDM:

(1) There is no mechanism for making more matter than anti-matter in the Standard Model of Cosmology. The LCDM model starts off with an initial difference between matter and anti-matter. The physics required to make more matter than anti-matter is not in the model, and this data set (i.e. the value of the baryon and lepton excess fractions) is excluded when doing "Precision Cosmology."

(2) Cold Dark Matter is thrown in ad hoc. The mass of the dark matter particle is not in the model...it's just assumed to be some >GeV rest-mass particle made between the electro-weak transition and neutrino decoupling from the charged particles. The mechanism for making the cold dark matter is not consistent with the Standard Model of Particle Physics. So, it's interesting that the "Standard Model of Cosmology" so easily throws out the much better known "Standard Model of Particle Physics." This means that there is no "Standard Model of Cosmo-Particle Physics."
There's also the fact that Cold Dark Matter over-predicts the number of satellite galaxies and over-predicts the amount of dark matter in the center of galaxies. But once again, this data set is conveniently excluded when doing "Precision Cosmology" and, worse, the mass of the 'cold dark matter particle' is not even a free variable that Planck or other cosmology groups include in the "Standard Model of Cosmology." There are tens of free variables that Planck uses to fit their data, but unfortunately, the mass of the dark matter particle is not one of them.

(3) Dark Energy is a constant added to Einstein's General Theory of Relativity, and as such, it is completely ad hoc. The beauty of Einstein's General Theory of Relativity was its simplicity. Adding a constant to the theory destroys part of the simplicity of the theory.
It also appears that Dark Energy is not thermodynamically stable. (See the following article: http://arxiv.org/pdf/1501.03491v1.pdf)
So, this element of the "Standard Model of Cosmology" is an ad hoc constant added to GR that appears to not even be thermodynamically stable.

Wednesday, March 11, 2015

The Two-Step Method for Controlling Inflation and Maintaining Steady Growth Rates

Given that I've focused a lot of attention recently on Dark Matter & Dark Energy, I've decided to switch gears and get back to topics of Energy&Currency.

The recent crisis in Russia has demonstrated the problems with basing a currency on any one commodity. In the case of Russia, approximately 26.5% of its GDP comes from the sale of petroleum products. The contribution of oil/gas taxes to the Russian government is roughly 50% of the government's total revenue, which means that Russia's currency is strongly impacted by changes in the price of oil/natural gas.
But not all oil/gas-producing countries are feeling the shock of low gas prices. The key to avoiding the shocks is to make sure that a large portion of the revenue from oil/gas is invested into stocks/bonds of companies/governments that will benefit from lower oil/gas prices. So, it's fine for an oil/gas-producing country to be specialized in one area of production, provided that its revenue goes into investments that will make money when oil/gas prices drop.

So, this leads me to the question I've been trying to answer for years: how can a country maintain constant inflation rates while also maintaining steady growth rates?

There are some bad options available: (a) a gold-based currency, (b) a fiat currency without rules, and (c) any currency backed by only one commodity...such as PetroDollars.

Second, I'd like to discuss the problem with leaving the control of the currency to just a Federal Board of Bankers. For example, there is a famous economist at Stanford named John Taylor. (You can check out his blog here.) He is credited with inventing the Taylor Rule, which determines how the Federal Reserve should change the interest rate as a function of the inflation rate and the growth rate of the economy.
While I'm a proponent of making the Federal Reserve rule-based, there is a clear flaw in John Taylor's Rule for controlling inflation and GDP:
There are two measured, independent variables (inflation and real GDP growth), but only one controlled, dependent variable (the federal funds rate.)

As such, the Taylor rule is doomed to fail. You can't control the fate of two independent variables by changing only one dependent variable. In order to control both the inflation rate and the real GDP growth rate, you need two free variables. The focus of the rest of this blog is on how to use 2 input variables to control the 2 output variables (inflation and real GDP growth rate.)
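To make the point concrete, here is a minimal sketch (my own, using the standard textbook coefficients of 0.5) of the classic Taylor rule: two measured quantities feed into a single output, the federal funds rate.

```python
def taylor_rule_rate(inflation, output_gap, inflation_target=2.0, r_neutral=2.0):
    """Classic Taylor (1993) rule, all quantities in percent.
    One output (the nominal federal funds rate) from two measured inputs."""
    return (r_neutral + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Hypothetical example: 3% inflation and output 1% above potential
print(taylor_rule_rate(inflation=3.0, output_gap=1.0))  # -> 6.0
```

Whatever the coefficients, a single policy rate cannot be chosen to hit arbitrary targets for both inputs at once, which is the point being made here.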

You need two keys, and the keys should be held by different people.

Tuesday, March 10, 2015

Review of "A World Without Time" and an Argument against Time Travel (Neutrino Drag)

Over the holidays, I read a book written back in 2005 titled "A World Without Time."
It's a good read for over the holidays because the first half of the book is largely biographical chapters on Albert Einstein and Kurt Gödel, and the second half is a step-by-step presentation of Gödel's argument that, if the General Theory of Relativity is true, then our perceived "flow of time" is not real.

This is an interesting argument, so I'd like to discuss it further in this post. It's actually quite similar to the argument that has been made by Julian Barbour for the last couple of decades. (In this prior post, I discuss Dr. Barbour's latest addition to his long-standing argument that there is no such thing as time, because 'time' in General Relativity is nothing more than another spatial dimension.)

So, let's look at Kurt Gödel's argument in more detail:
What Kurt Gödel did was to build a hypothetical universe that was consistent with GR. (While this universe was nothing like our universe, it was entirely consistent with the laws of GR.) In this hypothetical universe, there were closed space-time paths, in the same way that the Earth has an 'essentially' closed spatial path around the Sun. On such a closed space-time path, you would wind up right back where you started, meaning that you could revisit the past and it would be exactly the same as before.
Kurt Gödel then argued that in this hypothetical universe, there can be no such thing as 'flow of time' because you could easily go back in time or forward in time, just as easily as you can go left or go right at a T-intersection.
Kurt Gödel then argued that, since the 'flow of time' does not exist in this universe and since this universe is entirely consistent with General Relativity, then the 'flow of time' does not exist in our universe because our universe is governed by the laws of  General Relativity.

So, I think that this is a valid argument, except for the last step. The problem with this last step is that there are four forces of physics in our universe (gravity, E&M, weak nuclear, and strong nuclear.) I would agree with Kurt Gödel if the only law of physics had been gravity, but the weak nuclear force just doesn't cooperate so easily.

It's well known that the weak nuclear force violates both CP and T symmetry. In other words, there is an arrow of time associated with the weak nuclear force, and this arrow of time does not exist in the other forces of nature. So, what keeps us from ever being able to make a closed space-time path is the weak nuclear force, because we are ultimately surrounded by particles that interact via the weak nuclear force, i.e. neutrinos (and perhaps dark matter particles also interact via the weak nuclear force.) While the interaction of space-ships with neutrinos is negligible at normal velocities, the interaction is extreme when the space-ship starts moving at relativistic velocities (i.e. having a directed energy per nucleon on the ship of around ~10 GeV.) For example, I calculated that, when traveling at a speed where your kinetic energy is equal to your rest mass energy, the protons in your body start converting into neutrons at a rate of 1 proton every 2 milliseconds. (While this isn't particularly fast given the number of protons in your body, you can hopefully see that there's no way for you to travel anywhere near the speed of light without having neutrinos significantly destroy the structure of your body.)
So, there's no way to get a space-ship up to the required velocity/energy to create a closed space-time loop without bumping into neutrinos, which will create a drag force on the space-ship as they bump into electrons. The irreversibility of drag (due to collisions with particles that interact via the weak nuclear force) is what prevents us from creating a closed space-time path.

But what if there were no neutrinos for us to run into? Would time travel be possible? (i.e. would a closed space-time path be possible?)
I would answer that time travel would be possible in a world without the weak nuclear force. However, we live in a world with the weak nuclear force, and there is no way to get around it. In fact, the real question is: are the background neutrinos a requirement of our time-asymmetric world?

I don't think that it's coincidental that we are surrounded by neutrinos (the very particles that prevent us from traveling back in time.)  It's the neutrinos and dark matter particles that carry most of the entropy in the universe. It's the high-entropy, diffuse nature of neutrinos that pushes back against attempts to create closed space-time loops (just as it's impossible/difficult to create vortexes in extremely viscous liquids like honey.)


So, in summary, I think that Kurt Gödel has a valid point that there is no flow of time in General Relativity (alone.) But when you combine GR with the weak nuclear force, time travel along a closed space-time path is not possible.


Wednesday, November 19, 2014

Dark Matter Decaying into Dark Energy

I was quite excited to see that IOP PhysicsWorld had an article today on Dark Matter decaying into Dark Energy. The article discusses a recently accepted paper by Salvatelli et al. in Physical Review Letters. This post is devoted to describing what's in that paper, and then discussing how it supports an idea that I wrote about in May 2014 (i.e. that what we call Dark Energy is really the difference between the pressure exerted by light active neutrinos and the pressure exerted by the dark matter prior to it decaying into light active neutrinos.)

The gist of this recent PRL paper by Salvatelli et al. is the following: the tension between Planck's CMB data (using a LambdaCDM model) and many other data sources, such as measurements of Ho (the Hubble constant at z=0) by...you guessed it...the Hubble Space Telescope, can be resolved in a model in which dark matter decays into dark energy (but only when this interaction occurs after a redshift of 0.9.) There has been a major problem reconciling the low value of Ho estimated from Planck's CMB data (Ho = 67.3 +/- 1.2) with the much higher value measured by the Hubble Space Telescope (Ho = 73.8 +/- 2.4.)

However, when using a model in which dark matter can decay into dark energy, and when using RSD data on the fluctuations of matter density (as a function of the redshift, z), then the Planck estimate of the Hubble constant at z=0 becomes Ho = 68.0 +/- 2.3. This new model eases the tension between the Planck data and the Hubble Space Telescope measurement of Ho.


So, let's go into the details of the model:
(1) Dark matter can decay into dark energy (or vice versa is also possible in the model)
(2) The interaction between dark matter and dark energy is labeled 'q' in their model. When 'q' is negative, dark matter decays into dark energy. When 'q' is positive, dark energy decays into dark matter. And when 'q' is zero, there is no interaction.
(3) The group has binned 'q' into a constant value over different periods of time.
Bin#1 is 2.5 <  z  < primordial epoch  (in other words, from the Big Bang until ~5 billion years after the Big Bang)
Bin#2 is 0.9  <  z  < 2.5  (in other words, starting ~5 billion years after the Big Bang)
Bin#3 is 0.3  <  z  < 0.9
Bin#4 is 0.0  <  z  < 0.3   (i.e. most recent history)

The best fit values of these parameters are the following:  (See Table I and Fig 1 of their paper for the actual values)
q1 = -0.1 +/- 0.4   (in other words, q1 is well within 1 sigma away from zero)
q2 = -0.3 +0.25/-0.1 (in other words, q2 is only roughly 1 sigma away from zero)
q3 = -0.5 +0.3/-0.16 (in other words, q3 is roughly 2 sigma away from zero)
q4 = -0.9 +0.5/-0.3 (in other words, q4 is roughly 2 sigma away from zero)

There is a trend that q(z) becomes more negative as z gets closer to its value today of z=0.

Thursday, November 6, 2014

Gravity alone does not explain the Arrow of Time

Not sure if you all have seen the recent article by Julian Barbour about an arrow of time arising from a purely gravitational system. If not, check out the following articles in Physics or Wired.
First off, the title of the articles contradict the substance of the articles.
Julian Barbour has shown that a system of 1000 objects interacting only via gravity can start dispersed, then clump together, and then disperse again. That's it. This is not exciting work. This was a similar problem to one that I was assigned in a freshman level computer programming class...just with ~100 objects rather than 1000 particles.

Second, Julian Barbour has shown that there is no arrow of time for such systems, i.e. there is no way to tell the future from the past. (This is very different from, let's say, 'life', which only runs in one direction: you are born, you remember the past, and you eventually die.)

As such, Julian Barbour has re-proven something that has been known for quite awhile:  In a system of particles that only interact via gravity, there is no arrow of time.

How can scientists and journalists mess this one up so badly?  Thoughts?


Tuesday, September 23, 2014

Sound and lots of Fury: Confirmation of Gravity Wave B Polarization?

It's been a busy few months with a lot of Sound and Fury in the particle physics and astrophysics communities. Sterile neutrinos: Dead? Gravitational Waves: Dead? TeV Dark Matter: Alive??? (Well...)

My goal in this post is to hopefully calm people down and ask science journalists to be patient for more data before drawing any strong conclusions.
As such, given the recent article from the Planck collaboration discussing dust as a possible source of the B-polarized modes in the BICEP2 data, I wanted to remind people that there have been analyses done by other researchers (such as Prof. Richard Gott of Princeton University and Dr. Colley, formerly of Princeton University) that mostly confirm the results from the BICEP2 team. They estimate that the value of the tensor-to-scalar ratio, r, is 0.11 +/- 0.04 (and hence a detection of gravity waves with only ~2-sigma certainty, which is less than the 7-sigma certainty that BICEP2 originally suggested.)

To do so, they looked at the Gaussian/non-Gaussian nature of the data, and they found that the BICEP2 data is extremely Gaussian (as it should be if it came from gravity waves during inflation...because before inflation many researchers expect that the universe was a giant Gaussian fluctuation of extremely hot stuff.) However, Gott & Colley showed that the dust data from Planck at 353 GHz is very non-Gaussian. Hence, most of the signal that the BICEP2 team measured can't be attributed to dust, and there is 2-sigma certainty that the value of r is >0.

Note that Planck last year showed that the E-mode and TT data in the CMB are almost entirely Gaussian, and they showed that the initial slope of the primordial density fluctuations vs. wavenumber can be explained by generic theories of inflation. (Note that there are a lot of different inflation theories...in order to rule out the different theories we need to know quantities such as the slope of the density fluctuations and the tensor-to-scalar ratio of the B-mode polarized waves in the CMB.)

So, Prof. Gott and Dr. Colley showed that it's unlikely that dust is the cause of the B-mode signal in the BICEP2 data. But in addition to that, I want to point out that dust can't explain the drop in signal that BICEP2 (and preliminary Keck data) measured at a multipole value of l=50. If what BICEP2 had seen were due to dust, then the signal at l=50 would have been greater than the signal at l=100. As such, both the work by Planck and the work by Gott/Colley help to somewhat confirm the BICEP2 data; however, in each case, it's important to note that the certainty has dropped from 7 sigma to ~2 sigma.

Since there is only likely a 2-sigma confirmation of gravitational waves, we are left waiting for more data in order to cross the 5-sigma threshold. Waiting is no fun, but even worse than waiting is spending a lot of time talking about nothing but Sound & Fury. This may take a few months or a few years to sort out...who knows? Let's keep an open mind both ways.


Update: May 2015
Planck released its 2015 data set, and there are now some tight constraints on the tensor-to-scalar ratio, r. Note also that in most theories of inflation, you can only generate large values of r if you also generate large values of the "running of the scalar perturbations" with k, and the Planck data set some really tight constraints on d(n_s)/d ln(k). So, it's very likely that the BICEP2 results are not from primordial gravitational waves. However, I think that "it's just dust" is still not a sufficient answer at this point in time. We still need to quantify: how much is due to dust?

Wednesday, August 13, 2014

Department of (Zero Total) Energy

I think that we have to seriously consider the idea that the total energy content of the universe is zero. (While experimentally proving that it was exactly zero before inflation is nearly impossible, data from multiple data sets all lead to the same conclusion: the total energy content of the universe post-inflation is nearly zero and remains nearly zero throughout the rest of the history of the universe.) The image below shows a possible energy content of the universe throughout its history (for a set of parameters close to today's values...except for the radiation, which was made larger in order to make it easier to see. Note that this graph is from lecture #12 of Dr. Mark Whittle's Great Courses series titled Cosmology. I highly recommend watching it. It's infinitely better than the two Great Courses series by Dr. Sean Carroll.)


In the figure above, the kinetic energy (yellow) plus the gravitational energy (white) sum together to a value of zero (red) through time (x-axis.)

But wait a second. If the total energy content is and always will be zero, why do we say that there's an energy crisis? If the total energy content is zero and constant, how could there be a crisis?
Are millions of people worrying about a crisis that doesn't exist?

Tuesday, August 12, 2014

More evidence for 7.1 keV sterile neutrinos

I'm working on some other posts right now, but I wanted to make others aware of some more recent evidence that dark matter could be 7.1 keV sterile neutrinos.

First, Boyarsky et al. measured a signal at 3.55 keV from the center of the Milky Way Galaxy that their model couldn't explain. The signal that they measured is much wider than can easily be explained by emission from Argon, Potassium or Chlorine ions. It's clear that this signal is not instrument noise, because it doesn't show up in a blank-sky scan and because it shows up at different energies in galaxies with different z values (i.e. it's being redshifted when it comes from galaxies farther away from us.) Also, it's pretty clear that the signal is related to dark matter, because the flux increases roughly linearly with the dark matter content of the galaxy.

A 7.1 keV mostly-sterile neutrino particle is an interesting dark matter particle because this rest mass falls right in the middle of the ~2-10 keV range of rest masses that is consistent with both data on Dark Matter Halos and Lyman Alpha Forest. (See post on The Case for keV Dark Matter.)

I also want to point out that this 7.1 keV sterile neutrino is not ruled out by cosmological data. For example, there is a recent paper by Vincent et al. that puts constraints on the mass & mixing angle of sterile neutrinos if they make up 100% of the dark matter (see Figure 2). However, at 7.1 keV, the constraint is well above the value of the mixing angle measured by Boyarsky et al. and Bulbul et al.

There is also another reason to be interested by a keV mass sterile neutrino (however, it should be noted that the details below are completely speculative):

Let's imagine that a keV sterile neutrino could somehow decay into many, many light active neutrinos. (We can rule out the mechanism above, in which 1 sterile neutrino decays into 1 light neutrino and 1 photon with half the energy of the sterile neutrino's rest mass, because we need the sterile neutrino to decay into millions of active neutrinos.) However, knowing that a sterile neutrino can decay to a neutrino means that it is somewhat active in the weak nuclear force. This means that, if a black hole could consume sterile neutrinos, then it's possible that a black hole could eat dark matter (and regular matter) and spit out millions to billions of active neutrinos. We know that light active neutrinos can be ejected from the event horizon of a black hole. As such, it's possible that one heavy sterile neutrino with a rest mass of ~keV could turn into billions of active neutrinos of roughly micro-eV rest mass (provided that these active neutrinos have virtually no kinetic energy.) Also, if black holes can consume Fermi-degenerate sterile neutrino dark matter, then this can provide a mechanism that allows super-massive black holes to form (and not run away in size, because the density of keV sterile neutrinos in the center of galaxies is limited by Fermi pressure.) This would help to explain the major question of how super-massive black holes formed but did not consume their entire galaxy's dark matter. GeV cold dark matter would not solve the super-massive black hole problem in physics.

If there are really ~7 million times more active light neutrinos today than the expected number of ~60 per cm³, then this is just the number of light active neutrinos required to provide a quantum degeneracy pressure of 7×10⁻³⁰ g/cm³. The density of light neutrinos would need to be 420,000,000 per cm³ in order to reach this Fermi pressure, and this in turn would require a lot of matter and dark matter converting into light active neutrinos within super-massive black holes. Well, this turns out to be nearly exactly the same energy density as dark energy today (7×10⁻³⁰ g/cm³). In other words, if the actual density of neutrinos is roughly 420 million per cm³ rather than 60 per cm³, then we could explain dark energy as the Fermi pressure supplied by quantum degenerate light neutrinos.

This means that, while highly improbable, it might be possible that dark energy is the quantum degeneracy pressure of light active neutrinos that have decayed from keV mostly-sterile neutrinos. This last part about neutrinos as dark energy is still highly speculative; however, the information above about 7.1 keV sterile neutrino dark matter is starting to firm up. We'll have to await the launch of the Astro-H satellite in order to nail down whether the 3.55 keV emission signal is actually from the decay of sterile neutrinos.

Sunday, July 27, 2014

The Case for keV Dark Matter

As I've mentioned before in previous posts, the case for GeV Cold Dark Matter is becoming weaker and weaker every day. However, that doesn't seem to stop people who work in this field from defending their theories and attacking Warm Dark Matter.
That's fine, but for those of you who actually care about understanding how the universe works, we need to move on and actually analyze what the data is suggesting.
So, here's a list of what we know:

(1) Dark matter is real, and it's roughly 25% (+/-3%) of the universe. We can detect it "directly" through gravitational lensing and "indirectly" from the CMB spectra. Dark matter is not an artifact of Modified Newtonian Dynamics (i.e. MOND), because the location of dark matter is not 100% correlated with the location of normal matter. The Bullet Cluster is an excellent example of this, but there are many, many more examples of this phenomenon. When galaxies collide, the normal matter doesn't always follow the dark matter, and the dark matter doesn't immediately clump together. This is one of many signs that dark matter is not made of GeV rest-mass particles, but rather is quantum degenerate fermions with rest mass in the low keV range.

(2) Dark matter particles can't have a rest mass less than 1 eV (or else they would have been relativistic when the universe first recombined.) Because a keV dark matter particle is non-relativistic when electrons and protons recombine, a keV dark matter particle only affects the "effective number of relativistic particles" (i.e. the Neff in the CMB) with a contribution of roughly 0.03. This is well within the error bars for Neff, which was measured by Planck+BAO to be 3.30+/-0.27. When you include data from Big Bang Nucleosynthesis, the allowed range for Neff remains pretty much the same. This means that a keV sterile neutrino is completely compatible with data from the Planck satellite to within the 1-sigma uncertainty on Neff.

(3) If Dark Matter particles are Fermions, then their rest mass can't be less than ~ 2 keV because the mass density would be too low to explain experimental data on dark matter density in dwarf galaxies. (de Vega and Sanchez 2013)

(4) Using the Lyman Alpha Forest data in the early universe, there are constraints on the rest mass of dark matter particles. The exact cut-off depends on the allowed uncertainty (i.e. 1 sigma, 2 sigma, 3 sigma, 5 sigma) and on the model assumptions about the particle. The most recent best-fit-value I found for dark matter using Lyman Alpha Forest data was listed as 33 keV in Table II of Viel et al. 2013. The 1 sigma range was 8 keV to infinite rest mass (i.e. no constraint on the high end), and the 2 sigma range was 3.3 keV to infinite rest mass. What's interesting is that the best fit through the data was a keV rest mass dark matter particle...not a GeV rest mass dark matter particle. One way of explaining this is that a GeV dark matter particle would over-predict the Density Perturbations, whereas a 10-100 keV particle is a better fit through the data. For example, in the plot below of the Power Spectrum P(k) vs. wavenumber (k)  (where larger wavenumber means smaller length scales), the Cold Dark Matter line is well above the data points for the Lyman Alpha Forest (and this was known even back in 2002.) More recent data confirms that the data is better fit with a 33 keV particle than with a GeV scale particle.


The reason I find this funny is that the Lyman Alpha Forest had been used by proponents of GeV Dark Matter to fend off proponents of Warm Dark Matter. Ah, how the tides turn. While Lyman Alpha Forest Data can't rule out GeV Dark Matter, it is now suggesting that Dark Matter is Warm  (i.e. in the keV scale.)

(5) If a GeV dark matter particle obtains its mass from the Higgs Boson (as it appears that the tau lepton and the bottom quark do), then we can rule out particle masses from ~1 GeV up to half the rest mass of the Higgs Boson. The reason is that the Higgs Boson's partial decay widths to fermions are proportional to the square of the fermion's rest mass. In other words, we would have indirectly detected dark matter particles at CERN if they had rest masses on the order of ~1-62 GeV. In addition, if the dark matter particle were 10-1000 GeV, we would have likely detected it in detectors looking for WIMPs. As such, the range 1-1000 GeV is effectively ruled out for dark matter particles. (See plot from Aad et al. in PRL 23 May 2014)

(6) GeV Dark Matter would clump together in the center of galaxies. There is nothing to stop GeV Dark Matter from clumping together. This is the well known "Cuspy Core Problem" of GeV dark matter, and it also shows up as a problem with estimating the number and size of dwarf galaxies.
What solves these problems and keeps dark matter from clumping is the Fermi exclusion principle, which states that only one fermion can fill any position-momentum level. As mentioned above, the Fermi exclusion principle sets a lower limit of ~1-2 keV for dark matter in order to explain the actual density of dark matter in dwarf galaxies. But the principle also helps to explain why the density of dark matter is not cuspy in the center of galaxies, provided that the mass of the dark matter particle is in the range of 1-10 keV. Below is a comparison (from de Vega et al. 2014) of observational data for the density of dark matter in galaxies vs. theory for quantum degenerate dark matter with a rest mass around 2 keV. Notice that the theory matches the observational data quite well at small radius. GeV dark matter would tend to clump up at the center and could in no way match the data. However, it should be noted that I was unable to determine, after reading the entire paper, what rest mass was actually used in the simulations. This is a major oversight on their part, and I hope that it gets corrected shortly. The point is that a ~2 keV dark matter particle does a pretty good job of reproducing the actual distribution of dark matter in a wide variety of different types of galaxies.


So, let me summarize the points above:

Thursday, July 3, 2014

What is Dark Matter and Dark Energy?

In a  previous post, I provided a long explanation of what is Dark Matter and what is Dark Energy.
This post is a quick summary that I wrote in response to a post at Dispatches from Turtle Island titled "CDM looks better relative to WDM with better models."

Here's a quick summary of what is my best guess to explain dark matter/energy.

Dark matter = 2-10 keV rest-mass, mostly-sterile neutrino (spin 1/2),  quantum degenerate in galaxies which prevents the dark matter from clumping together

Dark energy = the quantum degeneracy pressure from ~0.01 eV rest-mass, active light neutrinos (spin 1/2); most of these active neutrinos are made as the mostly-sterile neutrinos each decay into many light neutrinos.

Below is my comment:

Tuesday, July 1, 2014

Rosencrantz and Guildenstern Are Alive: The case for Edward de Vere

I've been taking a break from energy and physics, and delving into the topic that caused me to pick the pen name that I did for this blog: Eddie Devere.
Yes, this is a play off of the names  Eddie Vedder and Edward de Vere, two artists I admire greatly.
I was drawn into the Shake-Speare authorship question by a friendly email from Alan Tarica, who sent me a link to a website (Forgotten Secrets) he created in which all 154 of William Shake-Speare's Sonnets are available to read, along with Alan's comments. While there's a lot to read, Alan makes a very convincing case that the Sonnets were written by Edward de Vere, and that the sonnets were written to Queen Elizabeth and the Earl of Southampton, who is likely the son of Edward de Vere and Queen Elizabeth. While I personally think that there's still some debate as to whether the Earl of Southampton was the bastard child of Edward de Vere and Queen Elizabeth, I have virtually no doubt that Edward de Vere used the pen name William Shake-Speare. The goal of this post is to give a summary of the main arguments why Edward de Vere is the actual author of the sonnets, narrative poems, and plays that were written under the pen name William Shake-Speare.


Trying to determine who wrote these sonnets, narrative poems, and plays is like going down the rabbit hole or getting stuck in the Matrix. It's easy to get lost in a world of Elizabethan politics, paranoia, and conspiracy theories. But let's not get stuck down in the rabbit hole.
Let's ask ourselves one simple question: what do famous authors write about?  Answer: they write about what they know best.

What did James Joyce write about? what about Faulkner? Virginia Woolf? They wrote about what they knew best. Ireland, the South, and depressed women.

So, let's look at a few of the many possible authors of the Shake-Speare collection: Francis Bacon, William Shakspear, Edward de Vere, Queen Elizabeth, Christopher Marlowe, and Ben Jonson.

Now let's ask the question: what did Francis Bacon write about? He wrote about science and religion. His most famous text (Novum Organum) is a philosophical text about the methods of science that is written in bullet format. It's pretty dry, just like Aristotle's lecture notes (the Organon), on which this text is based. Francis Bacon just didn't have the literary skills to write the Shake-Speare collection, even though he might have had the education to have done so.

Now what about William Shakspear? We don't know much about William Shakspear of Stratford-upon-Avon. But one thing is abundantly clear: William Shakspear of Stratford-upon-Avon was not even capable of signing his name, let alone writing poems. William Shakspear's will makes it abundantly clear that he was not a world-famous playwright. Likely, what happened is that, after the death of William Shakspear of Stratford-upon-Avon, the local church in Stratford-upon-Avon tried to make it look like William Shakspear of Stratford-upon-Avon was William Shake-Speare, due to the similarity of the names and the fact that nobody else had stepped forward as the author of the poems and plays.

So, let's once again ask the question: what did Edward de Vere write about? Guess what! Edward de Vere wrote poems about love and melancholy, with a lot of references to Greek & Roman mythology. Here's a link to some of the poems. But that's not all. As detailed in a Frontline documentary made in 1989 on the Shake-Speare question, it was well known at the time that Edward de Vere wrote under a pen name. (See the end of the following website for quotes from famous writers who list Edward de Vere as an excellent poet and playwright.)


A lot of authors write under pen names. Here's a wiki list of some of the famous ones. Some of the most famous include: Ben Franklin (Richard Saunders of Poor Richard's Almanack), Mark Twain (Samuel Langhorne Clemens), Pablo Neruda, Moliere, Lewis Carroll, Mary Ann Evans (George Eliot), George Orwell (Eric Arthur Blair), Leslie McFarlane (of Hardy Boys fame), J.K. Rowling, O. Henry, Isaac Asimov, V. Nabokov, Sylvia Plath, Soren Kierkegaard, Lemony Snicket, Woody Allen, and many, many more.

The assumption should be that the name on a book or play is a pen name, unless there is some direct proof that it's not a pen name. As such, there's no direct proof that William Shakspeare wrote the poems and plays of William Shake-speare. For example, we have no evidence that William Shakspeare could even write; we have no evidence in his will that he wrote poems/plays; and we have virtually no evidence from his original gravestone monument that he wrote plays/poems. (See image below.)


Saturday, June 21, 2014

Comparison of the Wealth of Nations: The 2014 Update

Yup, you guessed it. It's that time of year again. BP just released their latest updates for the production and consumption of energy throughout the world. Before I get into the details of the analysis, I want to point out that there is one major change in my analysis compared with previous analyses that I've posted on this site. (These links go to the previous posts in 2011, in 2012, and in 2013 on the Wealth of Nations.) The one change I've made is that I've included a new form of useful work: coal consumption for non-power-plant applications. In the developed world, ~90% of all coal is consumed in power plants. However, in places like China, the consumption of coal in power plants is only ~60% of total coal consumption. Therefore, in this update to the "Wealth of Nations" calculations, I've included a new term that takes 10% of the coal consumed for developed countries and 35% of the coal consumed for developing countries. This number is then multiplied by 10% to reflect the fact that the enthalpy content of the coal is typically only being converted into low-grade energy, whose exergy is only 10% of its enthalpy content. This is similar to the existing term I have for non-power-plant consumption of natural gas (i.e. NG for home heating.) The main result of this additional term is that the useful work generation has increased in China by 14%, in India by 10%, and in Russia by 3% compared with the useful work generation if this term were not included. If this term is included, then China's useful work generation has been greater than the US's useful work generation since 2011. In other words, China has actually had the world's largest economy since ~2011.
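Here is a small sketch in Python of the arithmetic behind this new term (the function name and the example consumption figure are mine and purely illustrative; actual coal-consumption numbers come from the BP dataset):

```python
def non_powerplant_coal_work(coal_twh, developed):
    """Extra useful work [TW-hr] credited for coal burned outside power plants.
    Assumes 10% (developed) or 35% (developing) of coal is used outside power
    plants, and that only 10% of that enthalpy counts as useful work (the exergy
    of low-grade heat)."""
    non_pp_fraction = 0.10 if developed else 0.35
    exergy_fraction = 0.10
    return coal_twh * non_pp_fraction * exergy_fraction

# Hypothetical example: a developing country burning 10,000 TW-hr of coal per year
print(non_powerplant_coal_work(10_000, developed=False))  # -> 350.0 TW-hr
```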

Here are some other conclusions before I get into a detailed breakdown of the analysis for this year.

(1) The US economy (as measured in [TW-hrs] of useful electrical and mechanical work produced) increased by 1.8% in 2013 compared with 2012. This is much better than the -1.5% decrease in useful work output between 2012 and 2011.
(2) There were two countries with negative growth rates between 2012 and 2013: Japan (-2.0%) and the UK (-1.5%.) And there were two countries with near-zero growth rates: Germany (0.3%) and Russia (0.2%.) The major countries with the highest growth rates were: China (1.8%), India (4.4%), and Brazil (3.7%.)
(3) The purchasing power parity GDP (i.e. PPP GDP) is a pretty good reflection of the wealth of a country, i.e. its capability to do mechanical and electrical work, when comparing developed economies (such as Germany, Japan, the USA and the UK.) However, the calculation of the GDP appears to be biased against a few countries, especially Canada and Russia, but also China and Brazil. I can understand why the IMF would be biased against Russia (i.e. black markets and collective farming likely aren't being accurately reflected in the GDP calculation), but I still have no clue why the IMF and other world organizations consistently underestimate the size of Canada's economy. If I were a Canadian representative to the IMF, I would voice my concern that the IMF is underestimating the size of the Canadian economy by at least twofold.


So, now I'm going to give a more detailed breakdown of the analysis and present the data in graphical form.

Wednesday, June 4, 2014

US CO2 Emission Reductions: A Good Start, but Much More is Needed Globally

As I've mentioned in a previous post, global emissions of CO2 are a major problem because the people who will be harmed the most by the higher temperatures and lower ocean pH are not those who are emitting the most CO2.
Before getting to the main points of this post, I'm going to summarize the main points from that previous post (i.e. why CO2 emissions are a problem.) The reason I'm summarizing this is that I still have many family members who get their news from Fox News, and hence think that CO2 emissions are a good thing.    ;-{

(1) There is a clear link between CO2 levels in the atmosphere and fossil fuel combustion (due to the decrease in oxygen at the same time that CO2 is increasing and the change in the isotope ratios of carbon 13 to carbon 12 in the atmosphere.)
(2) There is a clear link between CO2 levels in the atmosphere and lower pH levels in the ocean (more CO2 means more acidic oceans, which in turn can lead to coral bleaching.)
(3) There is a clear link between CO2 levels in the atmosphere and less IR radiation leaving the atmosphere at the IR frequencies at which CO2 absorbs.
(4) Since there is a partial overlap between the absorption frequencies for CO2 and H2O, the addition of CO2 into the atmosphere will have a greater effect on temperature in those locations where there is less water vapor.  (i.e. CO2 is fairly well mixed in the atmosphere, but water vapor concentration is highly dependent on local temperatures and relative humidity.)
(5) Predictions of models match well with experimental data. (meaning that temperatures are increasing the most in those locations where there wasn't much water vapor to start...i.e. the poles, deserts, and most other places in winter at night.)
(Image: Climate Model and Temperature Change)
(6) All other possible causes of global warming have been debunked. (i.e. it's not the sun, it's not volcanoes, and it's not natural fluctuations such as Milankovitch cycles.)

If you want a more in-depth summary of the case for why we need to significantly reduce CO2 emissions, please read the following articles from the website Skeptical Science (which, if you're not familiar with it, is a website devoted to debunking climate-skeptic arguments.)

Tuesday, May 20, 2014

Spacetime Expansion (i.e. Dark Energy) is due to the production of Quantum-Degenerate Active Neutrinos

I'd like to summarize what I've been trying to put into words over the last few years on this site. This article is still in rough-draft form, and I will likely be editing it over the next few weeks as I improve the main argument.

Dark energy is not actually energy. Dark energy is just the expansion of spacetime that occurs because matter (mostly keV sterile-neutrino dark matter) is slowly turning into active neutrinos, which are relativistic and quantum degenerate.

Assuming that the rest mass of the lightest active neutrino is 0.001 - 0.06 eV and that their temperature right now is ~2 Kelvin, their de Broglie wavelength is between 0.3 mm and 2 mm.

Also, using estimates for the electron neutrino number density of 60-200 per cubic cm, the average spacing between electron neutrinos is between 1.7 and 2.5 mm. These two length scales are extremely close to each other, which means that the lightest neutrino is quantum Fermi-degenerate, i.e. you can't pack more of them into a region than one per de Broglie wavelength cubed. (Just as you can't pack more electrons into a metal than one per de Broglie wavelength cubed without increasing its temperature.)
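Here is a rough Python sketch of that back-of-the-envelope comparison. The mass range, the ~2 K temperature, and the 60-200 per cubic cm density range come from the text above; the crude switch between relativistic and non-relativistic thermal momentum is a simplification I've added for illustration.

```python
import math

# Constants (SI)
h  = 6.626e-34   # Planck's constant [J s]
k  = 1.381e-23   # Boltzmann constant [J/K]
c  = 2.998e8     # speed of light [m/s]
eV = 1.602e-19   # [J]

def de_broglie_wavelength_mm(mass_eV, T=2.0):
    """Thermal de Broglie wavelength [mm] of a neutrino of rest mass mass_eV at temperature T.

    Uses p ~ 3kT/c when kT >> m*c^2 (relativistic) and p ~ sqrt(3*m*k*T) otherwise --
    a crude switch, good enough for an order-of-magnitude check.
    """
    kT  = k * T
    mc2 = mass_eV * eV        # rest energy [J]
    m   = mc2 / c**2          # rest mass [kg]
    p   = 3 * kT / c if kT > mc2 else math.sqrt(3 * m * kT)
    return (h / p) * 1e3      # [m] -> [mm]

def mean_spacing_mm(n_per_cm3):
    """Average spacing [mm] between neutrinos for a number density n [cm^-3]."""
    return (1.0 / n_per_cm3) ** (1.0 / 3.0) * 10.0   # [cm] -> [mm]

for m_eV in (0.001, 0.06):
    print(f"m = {m_eV} eV: lambda_dB ~ {de_broglie_wavelength_mm(m_eV):.2f} mm")
for n in (60, 200):
    print(f"n = {n} /cm^3: spacing ~ {mean_spacing_mm(n):.2f} mm")
```

Both numbers come out in the millimeter range, which is the point of the comparison above.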

The pressure of a relativistic Fermi quantum gas is only a function of the number density of fermions. The pressure (in units of mass per volume) is proportional to Planck's constant divided by the speed of light, all multiplied by the number density to the 4/3rd power. By starting with the density of dark matter at the recombination time (z ~ 1100), and assuming that the number of light neutrinos that can be created from a heavy neutrino is equal to the ratio of their rest masses, I estimate a degeneracy pressure of ~10^-30 grams per cubic cm. This number is surprisingly close to the current mass density of dark matter (~5*10^-30 grams per cubic cm) and is only a factor of 10 less than the required dark energy pressure of ~10^-29 grams per cubic cm. This means that with a few (somewhat) minor tweaks to my calculations, I could derive the dark energy "pressure" from the equations of a relativistic, quantum-degenerate neutrino gas. (Note: here's a link to an article arguing that relativistic, quantum-degenerate neutrinos can't be the source of dark energy because the pressure is way too low. However, that article assumes a neutrino rest mass of 0.55 eV and doesn't assume that neutrinos can be generated from dark matter. When you change the rest mass to ~0.01 eV and include the additional neutrinos produced by the decay of dark matter into many light neutrinos, then the quantum degeneracy pressure of neutrinos is large enough to explain dark energy. To be clear, my argument rests on a still unproven statement: that a ~2 keV sterile neutrino can slowly convert into ~10^5 active neutrinos of ~0.02 eV rest mass. If this statement is true, then we can explain why neutrinos have mass, what dark matter is, and what dark energy is.)
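Below is a minimal Python sketch of that order-of-magnitude formula. It drops the order-unity prefactor of the exact relativistic Fermi-gas expression (which is ~1 for a single spin state), and the number density in the example call is a placeholder rather than the result of the recombination-era chain of assumptions described above.

```python
# Sketch: degeneracy "pressure" of a relativistic Fermi gas, expressed as a
# mass density (P/c^2), as in the estimate above.  The order-unity prefactor
# of the exact expression is dropped.

hbar = 1.0546e-27   # reduced Planck constant [erg s]
c    = 2.998e10     # speed of light [cm/s]

def degeneracy_pressure_mass_density(n_per_cm3):
    """P/c^2 ~ (hbar/c) * n^(4/3) for relativistic, degenerate fermions [g/cm^3]."""
    return (hbar / c) * n_per_cm3 ** (4.0 / 3.0)

# Placeholder number density of light active neutrinos, motivated by the idea
# that each ~2 keV sterile neutrino could yield ~10^5 neutrinos of ~0.02 eV.
n_active = 1e5  # [cm^-3]
print(f"{degeneracy_pressure_mass_density(n_active):.1e} g/cm^3")
```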

So, if dark matter can slowly turn into active neutrinos over time, then dark energy might just be the degeneracy pressure of these relativistic, quantum-degenerate neutrinos. In the rest of this post, I'll try to make this argument stronger.

During the Big Bang, a large number of active neutrinos would have been produced, and then more would be produced as sterile neutrinos (i.e. dark matter) slowly convert/oscillate into light active neutrinos. Spacetime expands as active neutrinos are generated because it can't be any smaller than the volume required to keep the number density times the de Broglie wavelength cubed below 1, as sketched below.
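To illustrate that packing argument, here is a toy Python calculation of the minimum volume needed to hold N degenerate neutrinos of de Broglie wavelength lambda, under the n * lambda^3 <= 1 constraint assumed above. The numbers in the example calls are hypothetical.

```python
def minimum_volume_cm3(n_neutrinos, de_broglie_mm):
    """Smallest volume [cm^3] keeping n * lambda^3 <= 1 for N degenerate neutrinos."""
    lam_cm = de_broglie_mm / 10.0       # mm -> cm
    return n_neutrinos * lam_cm ** 3    # V_min = N * lambda^3

# Hypothetical: converting one sterile neutrino into ~1e5 light neutrinos of
# lambda ~ 2 mm multiplies the minimum volume required to hold them by ~1e5.
print(minimum_volume_cm3(1, 2.0), "cm^3 for one light neutrino")
print(minimum_volume_cm3(1e5, 2.0), "cm^3 after conversion")
```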

As seen in the image below, there are stringy areas and clumpy areas. It is entirely possible that the stringy areas are regions in which the dark matter is mostly light neutrinos (formed after recombination via the breakdown of heavier, sterile dark matter) and the clumpy areas (i.e. galaxies) are regions that mostly hold the keV sterile dark matter. What is keeping the whole universe from collapsing might be the quantum degeneracy pressure of the lightest active neutrino.



Monday, May 19, 2014

What is the curvature of spacetime?

This post was updated on June 30, 2014.
The astrophysics community is currently in a heated debate about the implications of the BICEP2 measurements of B-mode polarization in the cosmic microwave background (CMB.)
[For non-experts, the CMB is the nearly spatially-uniform (isotropic) radiation that we receive in whichever direction we look. The radiation matches that of a blackbody at a temperature of 2.726 +/- 0.0013 Kelvin. This radiation is nearly uniform, with only small fluctuations. This near uniformity can be contrasted with the extreme density fluctuations we see in matter. Before the temperature of the universe cooled to below ~3000 K (~0.3 eV), the density of hydrogen and helium in the universe was rather uniform because the hydrogen and helium were ionized (i.e. a plasma), in constant contact with the photons that today make up the CMB. Only after the temperature dropped below ~3000 K could the helium and hydrogen decouple from the radiation and clump together to form local dense spots (which eventually turned into galaxies, stars, and planets.) It appears that the dark matter was already much more lumpy than the photons and non-dark-matter at this point in time, so when the non-dark-matter decoupled from the photons, it started to fall into the local gravitational wells caused by the lumpy dark matter. (Note that my guess for why dark matter did not become extremely clumpy is that dark matter has a rest mass of ~2-10 keV and is prevented from clumping by Fermi quantum degeneracy, just as neutrons are in neutron stars.)]