# Entropy and the arrow of time

The entropy defined in *Entropy as ignorance: information entropy*:

$$S=-k_B\int\rho\ln\rho\,d\Gamma$$

depends only on the probability density $\rho$, whose time evolution is governed by the microscopic laws of motion; as we have seen in *Time and entropy*, these are time-reversal invariant. This means that the integral above is time independent, and so strictly speaking the entropy of a system should never increase. To see that explicitly, let us show that in general $dS/dt=0$:

$$\frac{dS}{dt}=-k_B\frac{d}{dt}\int\rho\ln\rho\,d\Gamma=-k_B\int\frac{\partial\rho}{\partial t}\left(\ln\rho+1\right)d\Gamma$$

From Liouville's theorem $\partial\rho/\partial t=-\sum_i\left(\dot{q}_i\,\partial\rho/\partial q_i+\dot{p}_i\,\partial\rho/\partial p_i\right)$, and since $(\partial\rho/\partial q_i)(\ln\rho+1)=\partial(\rho\ln\rho)/\partial q_i$ (and similarly for the $p_i$), we get:

$$\frac{dS}{dt}=k_B\int\sum_i\left[\dot{q}_i\frac{\partial}{\partial q_i}\left(\rho\ln\rho\right)+\dot{p}_i\frac{\partial}{\partial p_i}\left(\rho\ln\rho\right)\right]d\Gamma$$

Integrating by parts, the bulk terms contain $\partial\dot{q}_i/\partial q_i+\partial\dot{p}_i/\partial p_i$, which vanishes by Hamilton's equations, and we are left with a surface integral over the boundary $\partial\Gamma$ of phase space^{[1]}, where $\vec{v}=(\dot{\mathbb{Q}},\dot{\mathbb{P}})$ is the phase-space velocity:

$$\frac{dS}{dt}=k_B\oint_{\partial\Gamma}\rho\ln\rho\ \vec{v}\cdot d\vec{\Sigma}$$

Since $\rho$ is a probability density it will vanish on $\partial\Gamma$^{[2]}, and so will $\rho\ln\rho$, since $x\ln x\to0$ as $x\to0^+$. Therefore the last integral is null:

$$\frac{dS}{dt}=0$$

and this is *always* valid. This means that in principle if we consider a system undergoing an irreversible transformation, like the adiabatic expansion of a gas, its entropy remains constant; however, we know that in such cases entropy always increases: where does this discrepancy come from?
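This statement can be checked numerically in the simplest possible setting: a Gaussian phase-space density evolved by the flow of a harmonic oscillator. The differential entropy of a Gaussian depends only on the determinant of its covariance matrix, and Hamiltonian flow is symplectic (unit determinant), so the entropy stays exactly constant. A minimal sketch, assuming units with $m=k=1$ and an arbitrarily chosen initial covariance:

```python
import numpy as np

# Differential entropy of a 2D Gaussian phase-space density with covariance C:
# S = 1/2 * ln((2*pi*e)^2 * det C).  The harmonic-oscillator flow (m = k = 1)
# is the rotation M(t) in the (q, p) plane, with det M = 1, so det C -- and
# hence S -- cannot change in time.

def entropy(C):
    return 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(C))

C0 = np.array([[2.0, 0.3],
               [0.3, 0.5]])          # arbitrary initial (q, p) covariance
S0 = entropy(C0)

for t in np.linspace(0.0, 10.0, 50):
    M = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])  # exact symplectic evolution
    St = entropy(M @ C0 @ M.T)
    assert abs(St - S0) < 1e-12      # entropy is constant along the flow
print("entropy along the exact Hamiltonian flow:", S0)
```

The key point is that the exact dynamics only rotates the density in phase space without changing its volume, so no information (and no entropy) is gained or lost.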

What we want to show now is that this discrepancy comes from the fact that in reality *entropy is not a property of a given system, but of our knowledge about it*.
Let us first see this in a rather intuitive way: suppose we are numerically integrating the equations of motion of a system of particles enclosed in a fixed volume, and that we choose very unusual initial conditions: for example, we set the initial positions of the particles in only one half of the volume (we are thus simulating the adiabatic expansion of a gas). We let the system evolve for some time, then we stop, invert all the velocities of the particles, and restart the integration; what we are doing is essentially equivalent to letting the system evolve for some time and then "rewinding" it. We would therefore expect that as the system evolves the particles will come back to their initial conditions, since we are just "rewinding" the process; however, this does not occur, and the gas keeps evolving as a normal ideal gas.
This happens because computers have *finite* precision: the position and momentum of every particle are stored with a *fixed* number of significant figures, and as time passes *we are losing information about the system*, because the computer discards many significant figures that (mathematically) should be present. In order to actually see the gas go back to its original configuration we would thus need a computer with *infinite* precision, which would not lose information.
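A full molecular-dynamics version of this experiment would be lengthy, but the same information loss can be shown with a much smaller stand-in: a single invertible chaotic map (here the Chirikov standard map; the kick strength and the truncation precision are arbitrary choices for illustration). We iterate it forward while keeping only a fixed number of decimal digits, mimicking a computer with finite precision, then "rewind" by applying the exact algebraic inverse for the same number of steps:

```python
import math

K = 5.0                  # kick strength: strongly chaotic regime (arbitrary)
TWO_PI = 2.0 * math.pi

def truncate(x, digits=8):
    """Mimic finite machine precision: keep only `digits` decimal digits."""
    return round(x, digits)

def step_forward(theta, p):
    """One step of the Chirikov standard map (exactly invertible on the reals)."""
    p_new = (p + K * math.sin(theta)) % TWO_PI
    theta_new = (theta + p_new) % TWO_PI
    return truncate(theta_new), truncate(p_new)

def step_backward(theta, p):
    """Exact algebraic inverse of step_forward."""
    theta_old = (theta - p) % TWO_PI
    p_old = (p - K * math.sin(theta_old)) % TWO_PI
    return truncate(theta_old), truncate(p_old)

theta0, p0 = 1.0, 0.3    # arbitrary initial condition in the chaotic sea
theta, p = theta0, p0
N = 50
for _ in range(N):       # evolve forward ...
    theta, p = step_forward(theta, p)
for _ in range(N):       # ... then "rewind"
    theta, p = step_backward(theta, p)

print("mismatch after rewinding:", abs(theta - theta0), abs(p - p0))
```

With exact real arithmetic the backward pass would reproduce $(\theta_0, p_0)$ exactly; the mismatch comes entirely from the digits discarded at each step, amplified exponentially by the chaotic dynamics, just like in the gas experiment above.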

Let us now see this concept in a more formal way.
When we consider a system from a microscopic point of view^{[3]}, the probability density $\rho$ carries complete information about the system, since it will be of the form:

$$\rho(\mathbb{Q},\mathbb{P},t)=\prod_{i}\delta\left(q_i-\overline{q}_i(t)\right)\delta\left(p_i-\overline{p}_i(t)\right)$$

where $(\overline{q}_i(t),\overline{p}_i(t))$ is the trajectory actually followed by the system in phase space.
Consider now a situation where we have less information about the system: suppose for example we have a phase space probability density $\rho$ that carries no information about the momenta of the particles^{[4]}. Then, as $\rho$ evolves, it will satisfy a diffusion-like equation in configuration space^{[5]}:

$$\frac{\partial\rho}{\partial t}=D\nabla^2\rho$$
Since the solutions of a diffusion equation spread out as time passes, the entropy of the system now *increases*, *and this follows only from our lack of knowledge about the momenta of the particles*.
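This entropy growth can be made concrete by evolving the configuration-space density numerically. A minimal sketch (the grid size, diffusion constant and the "gas in the left half" initial profile are arbitrary choices echoing the expansion experiment above): we integrate $\partial\rho/\partial t=D\,\partial^2\rho/\partial x^2$ in a box with reflecting walls and track the Shannon entropy of the discretized density:

```python
import numpy as np

# Free expansion seen through the diffusion equation: the density starts
# concentrated in the left half of a box with reflecting (no-flux) walls,
# and the Shannon entropy of the discretized density grows monotonically.
N, D = 100, 1e-3          # grid points, diffusion constant (arbitrary)
dx = 1.0 / N
dt = 0.2 * dx**2 / D      # well inside the explicit-Euler stability limit

rho = np.full(N, 1e-9)
rho[: N // 2] = 2.0       # all the "gas" in the left half
rho /= rho.sum() * dx     # normalize so that the integral of rho is 1

def entropy(rho):
    p = rho * dx          # probability of finding the system in each cell
    p = p[p > 0]
    return -np.sum(p * np.log(p))

history = [entropy(rho)]
for _ in range(5000):
    lap = np.empty_like(rho)
    lap[1:-1] = rho[2:] - 2.0 * rho[1:-1] + rho[:-2]
    lap[0] = rho[1] - rho[0]          # no-flux boundary conditions
    lap[-1] = rho[-2] - rho[-1]
    rho = rho + (D * dt / dx**2) * lap
    history.append(entropy(rho))

print(f"S(initial) = {history[0]:.3f}, S(final) = {history[-1]:.3f}")
```

Each update replaces a cell's density by a convex average of itself and its neighbours (a doubly stochastic map), which is why the discrete entropy can never decrease: the coarse-grained description keeps forgetting where the gas came from.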

- ↑ Note: in momentum space this boundary is always a surface at infinity, while in configuration space it can also be a finite surface, depending on the properties of the system (obviously, if the particles can occupy only a limited region of space, then the boundary in configuration space will be finite).
- ↑ Intuitively, this can be justified from the fact that $\rho$ must be normalized, $\int\rho\,d\Gamma=1$, and this can happen only if $\rho$ tends to zero at infinity.
- ↑ Which is what we have done in the proof of the fact that $dS/dt=0$, since we have used Hamilton's equations.
- ↑ For example, this can be obtained from the previous probability density by integrating over the momenta and then renormalizing.
- ↑ In fact, as we have seen in The diffusion equation, the diffusion and continuity equations are equivalent if $\vec{J}=-D\vec{\nabla}\rho$: therefore, since $\rho$ satisfies a continuity equation (being a probability density), it will also satisfy a diffusion equation with diffusion constant $D$.