“The distinction between the past, present and future is only a stubbornly persistent illusion” ― Albert Einstein

Universe: A Dream reigning in the veins

Saturday, 27 August 2022

Mathematical Theory of Probability: A historical perspective from Pascal to Laplace

 

                                     3 dice rolling problem

Terminology

The term 'probability' literally means chance, odds, expectation, or likelihood. It originates from the medieval Latin word 'probabilis', meaning plausible. Probability is used to study the behavior of stochastic (random) processes such as tossing a coin or rolling a die. Historically, probability was closely associated with the term chance and used synonymously with it until a proper mathematical perspective emerged in the 18th century. The early form of the theory was called the 'Doctrine of Chances'.

Origin

Probability finds its place in the ancient and medieval laws of evidence, where courts had to deal with proofs, credibility, and the uncertainties of testimony. Games of chance are believed to have existed as early as the Egyptian civilization. In excavations of the tombs of the Pharaohs, archaeologists found a game called 'Hounds and Jackals', which resembles our modern-day game of 'Snakes and Ladders'. This must have been an early stage in the invention of dice. Throwing a set of dice and betting on the outcome is an ancient human habit that has been passed on from one civilization to another. The first dice game found in the literature of the Christian era was known as 'Hazard', played with two or three dice; the game is thought to have been brought to Europe by knights returning from the Crusades. The pottery of the Greek civilization shows the existence of games that involved various degrees of uncertainty. Present-day casinos all over the world have carried this legacy into modern times.



During the Renaissance, Europe was a center of gambling and other games of chance in which enormous amounts of wealth were put at stake. Maritime insurance premiums had to be estimated from the risks involved. But at that time (the 17th century) there was no proper mathematical framework that could provide a logical basis for calculating or predicting the degree of risk. People wanted an estimate of the returns on the wealth they staked, and such estimates could only be provided by a sound mathematical theory that took into account the randomness (unpredictability) of these systems and allowed a thorough risk analysis. Under such circumstances, the stage was set for the mathematicians of that era to step forward and develop a mathematical framework that would meet the needs of the society of the day.





The early form of the theory

The early form of the mathematical theory of probability can be attributed to four mathematicians of that era: the Italian polymath Gerolamo Cardano, the French mathematicians Pierre de Fermat and Blaise Pascal, and the Dutch mathematician Christiaan Huygens. Cardano began his investigations as early as 1560, but his work remained unknown for about a hundred years. He investigated the sums of the numbers obtained from throws of three dice; the randomness involved fascinated him, and he tried to find a pattern in it. The topic was so popular in those days that even Galileo could not stay away from it. In the early 17th century he considered the problem of throwing three dice and pointed out that some totals occur more often than others simply because there are more ways to make them. From the middle of the 17th century a correspondence began between Fermat and Pascal aimed at solving problems arising from games of chance, and this triggered the first serious attempt to develop a mathematical basis for probability. In 1657 Huygens gave the subject its first comprehensive treatment.
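
A short brute-force count makes Galileo's point concrete. This is a minimal Python sketch written for this post (not taken from any historical source); it simply enumerates all 216 equally likely outcomes of three dice:

```python
from itertools import product
from collections import Counter

# Count how many of the 6^3 = 216 equally likely outcomes give each total.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

for total in sorted(counts):
    print(f"sum {total:2d}: {counts[total]:3d} ways, probability {counts[total] / 216:.4f}")
```

The totals 10 and 11 each arise in 27 of the 216 ways, while 9 and 12 arise in only 25 ways each, which is exactly the kind of asymmetry Galileo pointed out.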

Subsequent developments

In the 18th century the subject was taken up by the Swiss mathematician Jacob Bernoulli and the French mathematician Abraham de Moivre. In his Ars Conjectandi (1713), Bernoulli derived the first version of the Law of Large Numbers (LLN), which states that the average of the results obtained from a large number of trials of a random experiment should be close to the expected value, and that the gap shrinks as the number of trials increases. De Moivre, in his Doctrine of Chances (1718), showed how to calculate a wide range of complex probabilities.
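
The content of the Law of Large Numbers is easy to see in a toy simulation (illustrative only, not part of Bernoulli's original treatment). The average of a fair six-sided die is 3.5, and the running average of many rolls drifts towards it:

```python
import random

random.seed(1)

# Expected value of a fair six-sided die is (1 + 2 + ... + 6) / 6 = 3.5.
for n in (10, 100, 1_000, 10_000, 100_000):
    average = sum(random.randint(1, 6) for _ in range(n)) / n
    print(f"{n:7d} rolls: average = {average:.4f} (expected 3.5)")
```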

By the 19th century it was evident that the mathematical theory of probability was a powerful tool with a wide range of real-life applications: the randomness and uncertainty in human activities and natural phenomena could be addressed by a well-formulated theory of probability. The German mathematician and physicist Gauss reaffirmed this by applying the theory to astronomy, with remarkable predictive success. From a handful of observations he determined the orbit of Ceres (a dwarf planet in the asteroid belt between Mars and Jupiter). He used the method of least squares to correct for errors in the observations, an analysis that became routine in astronomy thereafter, and in doing so he relied on the normal distribution of errors, a probabilistic tool. In 1812 the French scholar and polymath Laplace further developed the theory, introducing fundamental concepts of mathematical expectation, including the moment generating function, the method of least squares, and hypothesis testing. From here the mathematical theory of probability slowly developed a bond with the mathematical theory of statistics: it became clear that the two are related and that one cannot do without the other.
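
The method of least squares is still the standard way of fitting a model to noisy observations. Here is a minimal sketch with made-up illustrative data (nothing to do with Gauss's actual Ceres calculations), fitting a straight line by minimizing the sum of squared errors:

```python
import numpy as np

# Hypothetical noisy measurements of a quantity that grows linearly with time.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

# Least squares: choose the slope a and intercept b minimizing sum((y - (a*t + b))**2).
a, b = np.polyfit(t, y, deg=1)
print(f"best-fit line: y = {a:.3f} t + {b:.3f}")
```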

                                       Dwarf planet Ceres

Probability in Physics

By the end of the 19th century physics was the leading science of the era. Classical mechanics, developed by Newton, Lagrange, Hamilton, and others, turned out to be invalid for the sub-atomic world, and science needed new physical theories to describe the observations. Who knew that the mathematical theory of probability would form the cornerstone of the theories to come? It was found that properties of gases such as temperature could only be expressed in terms of the motions of a vast number of particles, and this could not be done without the help of statistics, since the number of particles involved is enormous. Ludwig Boltzmann and J. Willard Gibbs developed the field of Statistical Mechanics to address the problem, built on the concepts of probability and statistics.
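
A simple example of this statistical viewpoint: for an ideal monatomic gas, temperature is nothing but a measure of the average kinetic energy of the molecules,

$$\frac{3}{2} k_B T = \left\langle \frac{1}{2} m v^2 \right\rangle,$$

where $k_B$ is Boltzmann's constant and the angle brackets denote an average over a huge number of molecules. No single molecule has a temperature; only the statistical ensemble does.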

                                    Gas particles


The laws followed by sub-atomic (micro) particles turned out to be strange and totally different from those of the classical world. To address this, Quantum Mechanics was developed in the 20th century by Max Planck, Albert Einstein, Niels Bohr, Werner Heisenberg, Erwin Schrodinger, Paul Dirac, Wolfgang Pauli, Richard Feynman, and others. A cornerstone of the modern quantum theory is the uncertainty principle proposed by Heisenberg, and the theory as a whole is built on the concepts of probability.





Probability in today's World

The twentieth century saw the mathematical theory of probability develop by leaps and bounds. One of the basic problems was finding a formal, unambiguous mathematical definition of probability itself: the subject is so intuitive and application-driven that pinning down a rigorous definition proved surprisingly hard. The classical definition came first, but it was far from sufficient and gave way to the frequency definition. Finally, the frequency definition was replaced by the axiomatic definition given by the Russian mathematician Andrey Kolmogorov in 1933. The axiomatic definition rests on three axioms that are logically sound and accepted worldwide, and it settled the long-standing disputes among mathematicians regarding the definition of probability.
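
For reference, Kolmogorov's three axioms can be stated compactly. For a sample space $\Omega$ and a probability measure $P$ defined on its events:

$$P(A) \geq 0 \ \text{ for every event } A, \qquad P(\Omega) = 1, \qquad P\!\left(\bigcup_{i} A_i\right) = \sum_{i} P(A_i),$$

where the last equality holds for any countable collection of mutually exclusive events $A_1, A_2, \dots$ Everything else in the theory, from conditional probability to the law of large numbers, is built on these three statements.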

Probability and statistics found a link and came together through the concept of hypothesis testing, developed by the Polish mathematician Jerzy Neyman and the British polymath R. A. Fisher. In modern times hypothesis testing is applied in many fields, from biological and psychological experiments to clinical drug trials and economics. Probability is now used in concepts like Markov processes, Brownian motion, and other settings where we deal with aggregates of entities. Random fluctuations of stock markets are studied using probabilistic models to provide predictions for investors, and mathematical finance has emerged as a new area of mathematics.
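
As a toy example of the kind of probabilistic model used for stock prices, the sketch below simulates a geometric Brownian motion, a standard but heavily simplified model of random price fluctuations; the starting price, drift, and volatility values are arbitrary illustrative choices:

```python
import math
import random

random.seed(7)

price = 100.0    # hypothetical starting price
mu = 0.0002      # assumed daily drift
sigma = 0.01     # assumed daily volatility

# Geometric Brownian motion: each day the log-price takes a small random step.
for day in range(1, 251):
    shock = random.gauss(0.0, 1.0)
    price *= math.exp(mu - 0.5 * sigma**2 + sigma * shock)
    if day % 50 == 0:
        print(f"day {day:3d}: simulated price = {price:.2f}")
```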








The modern era is an era of computer simulations, artificial intelligence, quantum computing, data science, and more, and in almost all these areas the mathematical theory of probability plays a significant role. There is still plenty of room for development: mathematicians all over the world work on stochastic models with the aim of improving the theory and widening its applicability. We hope that a theory that grew out of the practical needs of human society will continue to develop and help humanity reach new heights in science and technology.




By

Prabir Rudra




Wednesday, 24 August 2022

Quantum Mechanics: Are we a particle or a wave or both!!!?? Two contradictory pictures of reality!!

 


I think I can safely say that nobody understands quantum mechanics-- Richard Feynman


The macro world around us works on a simple set of rules and principles that are deeply inscribed in our intuition. Push or pull an object and it tends to move; throw a stone upwards and it returns to the Earth; walk towards a wall and try to pass through it and you cannot. These are familiar, accepted pictures of day-to-day life. But as soon as we glance into the atomic and sub-atomic (micro) world, the picture changes completely. As more and more observations were made, it became clear that these microparticles follow laws that are very strange compared to the accepted laws of the classical world. By the end of the 19th century it was quite clear that classical mechanics would not work at the micro level.

Black body radiation

It began with the problem of black body radiation. A perfectly black body is one that absorbs all the electromagnetic radiation falling on it; it is an idealized system. The radiation spectrum of such a body could not be explained by the classical Rayleigh-Jeans law. Max Planck adopted a mathematical trick to solve the problem: he assumed that radiation is not emitted and absorbed continuously but in discrete packets of energy called quanta. This means that the energy exchanged can only come in integral multiples of a small unit (the quantum), not in arbitrary amounts.
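
In symbols, Planck's hypothesis says that radiation of frequency $\nu$ can only carry energy in whole-number multiples of a smallest unit:

$$E = n h \nu, \qquad n = 1, 2, 3, \dots$$

where $h \approx 6.626 \times 10^{-34}\ \mathrm{J\,s}$ is Planck's constant. The smallness of $h$ is why this graininess of energy goes unnoticed in everyday life.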

Planck himself was uneasy about this assumption and did not quite believe it, calling it an act of sheer desperation. But with it the equations worked perfectly, and with this adjustment Planck proposed the basic form of quantum theory in 1900. It took some time for people to adjust to such ideas, but slowly it happened. In 1905 Albert Einstein explained the photoelectric effect by treating the discrete quanta of light as particles, later called photons; he was awarded the 1921 Nobel Prize in Physics for this contribution. Planck's theory, coupled with Einstein's explanation of the photoelectric effect, forms the core of what is now called the "old quantum theory".
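
Einstein's explanation can be summarized in one line: a photon of frequency $\nu$ gives all its energy $h\nu$ to a single electron, which then escapes the metal with at most

$$K_{\max} = h\nu - \phi,$$

where $\phi$ is the work function of the metal (the minimum energy needed to free an electron). This simple relation explained why light below a threshold frequency ejects no electrons at all, however intense it is.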

Dawn of the era of quanta

With the advent of the concept of energy quanta, it seemed that since the building blocks of matter follow this strange law, the entire physical world would have to be explained on this basis. Using the idea of quanta, Niels Bohr put forward his model of atomic structure, which laid the groundwork for the modern picture of the atom. He argued that electrons revolve around the nucleus only in certain allowed orbits with discrete energies, and that a transition from one orbit to another is accompanied by the absorption or emission of a quantum of energy equal to the difference between the two levels. As time went by, scientists looked for a quantum description of the fundamental forces of nature, such as the electromagnetic force. Richard Feynman played a leading role in quantizing the electromagnetic force through the theory of Quantum Electrodynamics (QED). We are still searching for a proper quantum description of the gravitational force, the sought-after theory of quantum gravity. String theory, loop quantum gravity, gravity's rainbow, and others are leading contenders, but none has yet provided a flawless quantum picture of gravity.
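
For the hydrogen atom, Bohr's model gives the allowed energies in closed form, a standard textbook result quoted here for illustration:

$$E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots$$

A jump from a higher level $n_2$ to a lower level $n_1$ emits a photon carrying exactly the energy difference $E_{n_2} - E_{n_1}$, which reproduces the observed spectral lines of hydrogen.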

Wave-particle duality: How it really got bizarre!! 

To describe the physical properties of sub-atomic particles, it turned out that not only did the accepted picture of classical mechanics fail, but our intuition about the very nature or identity of the particles needed a serious revision. It became evident that, to explain the physics of the micro world, we had to adopt a dual wave-particle picture of all the matter around us. It is a bizarre scenario in which matter behaves differently depending on whether it is being observed. More precisely, an unobserved particle is described by a delocalized wave, but as soon as a measurement is made the wave function collapses and the particle exhibits a definite, particle-like nature. This is one of the fundamental features of the Copenhagen interpretation of quantum mechanics, developed by Niels Bohr and Werner Heisenberg. It was such an exotic and unbelievable concept that Albert Einstein wrote, "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality. Separately neither of them fully explains the phenomena of light, but together they do." He disliked the indeterminism of the theory and famously remarked that "God does not play dice with the universe." Although many scientists, including Max Planck, Albert Einstein, Niels Bohr, Werner Heisenberg, Erwin Schrodinger, and Arthur Compton, were involved in developing this concept, the idea is usually attributed to the French physicist Louis de Broglie, who proposed the wave nature of matter in 1924; it was confirmed experimentally in 1927 through electron diffraction. De Broglie was awarded the 1929 Nobel Prize in Physics for this work.


      Niels Bohr involved in a discussion with Albert Einstein

Further Developments

As the wave-particle duality of matter gained acceptance, a picture of uncertainty at the micro level hovered before the eyes of physicists. This is evident when we consider the delocalization of a particle in the form of a wave. Such uncertainties bring the mathematical theory of probability into the picture, since we are no longer dwelling in the familiar deterministic world of classical objects. To set up a proper mathematical theory of this interpretation, we needed a wave function and a principle that quantifies the uncertainty at the quantum level.

Werner Heisenberg, a young German theoretical physicist, published his uncertainty principle in 1927, stating that the position and the momentum of a quantum particle cannot both be measured simultaneously with arbitrary precision. He expressed this mathematically through a set of inequalities asserting a fundamental limit to the accuracy with which certain pairs of physical quantities of a particle can be predicted from the initial conditions. Heisenberg also developed the matrix formulation of quantum mechanics, and in 1932 he was awarded the Nobel Prize in Physics for "the creation of quantum mechanics".
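
In its most familiar form the principle relates the uncertainty in position $\Delta x$ to the uncertainty in momentum $\Delta p$:

$$\Delta x \, \Delta p \geq \frac{\hbar}{2},$$

where $\hbar = h/2\pi$ is the reduced Planck constant. Squeezing one of the two uncertainties necessarily inflates the other.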

Heisenberg's Uncertainty Relation



              Werner Heisenberg


Erwin Schrodinger was an Austrian-Irish physicist who developed the wave mechanics formulation of quantum mechanics. In 1926 Schrodinger published his famous wave equation, which mathematically determines the wave function. He first derived it for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom; he later published the dynamic equation characterizing the time dependence of the wave function. With the complex-valued solutions of Schrodinger's wave equation, quantum mechanics shifted from real numbers to complex numbers. Schrodinger was deeply disturbed by the probabilistic interpretation of quantum theory and by matrix mechanics. To ridicule the Copenhagen interpretation he conceived the famous thought experiment known as Schrodinger's cat paradox, and he kept himself aloof from the uncertainty principle and the probabilistic aspects of quantum theory, which he did not believe in. Regarding this he said, "I don't like it, and I am sorry I ever had anything to do with it." Nevertheless, his wave equation is universally celebrated as one of the most important achievements of the twentieth century and revolutionized most areas of quantum mechanics. He won the Nobel Prize in Physics in 1933 for his work on quantum mechanics.
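
The two forms of the equation mentioned above can be written explicitly for a particle of mass $m$ moving in a potential $V$. The time-independent equation is

$$-\frac{\hbar^2}{2m}\nabla^2\psi + V\psi = E\psi,$$

and the time-dependent equation governing the evolution of the wave function $\Psi$ is

$$i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi.$$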

Paul Dirac was an English theoretical physicist who had a profound impact on the development of quantum mechanics through his theory of relativistic quantum mechanics. His greatest contribution was the Dirac equation, which describes the behavior of fermions and predicted the existence of antimatter. He shared the 1933 Nobel Prize in Physics with Schrodinger for his contributions to quantum mechanics. By building special relativity into quantum theory, his work was an early step towards the larger reconciliation of quantum mechanics with gravitation, the dream that Albert Einstein pursued until his death.
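
In natural units (with $\hbar = c = 1$), the Dirac equation for a free spin-1/2 particle of mass $m$ reads

$$\left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi = 0,$$

where the $\gamma^{\mu}$ are 4x4 matrices and $\psi$ is a four-component spinor. The negative-energy solutions of this equation are what led Dirac to predict antimatter.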

The development of quantum mechanics continued throughout the twentieth century and continues even today. Richard Feynman, an American physicist, made a landmark contribution through the development of quantum electrodynamics, which reconciled quantum mechanics with electromagnetism; he received the Nobel Prize in Physics in 1965 jointly with Julian Schwinger and Sin-Itiro Tomonaga. Feynman is also famous for his Feynman diagrams, pictorial representations of the mathematical expressions describing the behavior of sub-atomic particles.



The fifth Solvay Conference on Physics, held in 1927, was dedicated to the newly formulated quantum theory. The photograph of its participants is often described as the picture with the highest collective IQ in history.

Implications of the theory

It is sometimes said that the bizarre nature of the theory reflects our lack of sufficient knowledge about it. Some of the implications of quantum theory, like quantum entanglement, are truly baffling!! Quantum entanglement means that two particles can share a single quantum state, so that measurements made on one are instantly correlated with measurements made on the other, no matter how far apart they are. It is as if the two particles behave as a single system even when separated by a large spatial distance. Just amazing!! This phenomenon was addressed in a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, which came to be known as the EPR paradox: they argued that such behavior seemed to conflict with locality and causality, and that quantum mechanics must therefore be an incomplete description of reality.
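
The simplest example of an entangled state is a pair of two-level systems (qubits) prepared in the state

$$|\Psi\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),$$

in which neither particle has a definite value on its own, yet a measurement finding the first particle in state 0 guarantees the second will also be found in 0, and likewise for 1, however far apart the two particles are.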

Quantum teleportation is a procedure for transferring a quantum state from a sender to a spatially separated receiver. Experimentally, quantum teleportation has been achieved for quantum information carried by photons, atoms, and electrons. But the popular notion of teleportation, transferring physical objects from one location to another, remains far beyond reach at present.

Quantum computing is the study of computing technologies based on the principles of quantum mechanics. Here the intricacies of quantum laws are exploited to solve certain very complicated classical problems efficiently. To this end, quantum computers have been developed that are far more complex than, and completely different from, their classical counterparts; problems with a high degree of complexity can be attacked efficiently on these machines, which use quantum laws as their basic ingredient.
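
The basic resource of such a machine is the qubit, which unlike a classical bit can exist in a superposition of its two values,

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

so a register of $n$ qubits can hold a superposition of all $2^n$ classical bit strings at once; exploiting this, together with entanglement and interference, is what gives quantum algorithms their power.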


Quantum computer

Quantum mechanics is the theory of nature working at the most fundamental level, and nature does not unfold its secrets easily. The theory is still considered to be in its infancy, and we do not know how long we will have to wait for a complete picture at the quantum level. Over the years we have sought to quantize various theories of physics by reconciling them with quantum mechanics; this is felt necessary because quantum theory works at the fundamental level, and all other theories must obey its laws there. Currently the biggest puzzle of modern physics is to reconcile quantum mechanics with gravitation in a satisfactory theory of quantum gravity. Many stalwarts, including Einstein, have worked towards it, but a satisfactory result is yet to come. The singularity inside a black hole is believed to be the perfect laboratory for testing quantum gravity, and with the significant advances in black hole physics we are hopeful of achieving something fruitful in the near future. Perhaps we are waiting for the next Heisenberg, Schrodinger, or Dirac to come along and show us the path.


By
Prabir Rudra








Friday, 19 August 2022

Tuesday, 16 August 2022

The curious case of Pluto


 

By

Prabir Rudra


Saturday, 6 August 2022

White dwarf: The death-bed of a star telling the story of its glorious past

 


A white dwarf is a compact stellar core remnant resulting from gravitational collapse. Such objects are composed mostly of dense, electron-degenerate matter. Since nuclear fusion no longer takes place in a white dwarf, it produces almost no light of its own; still, a white dwarf appears faintly lit in the sky because of the residual thermal energy left over from the fusion reactions during the star's lifetime. Sirius B, at a distance of 8.6 light-years, is the nearest known white dwarf, and around eight white dwarfs are known in the immediate neighborhood of our solar system.


Formation

When a star is on the verge of completing its life cycle, the thermonuclear fuel in its core is depleted. Nuclear fusion in the core halts, and the balance of forces in the star is lost, with the inward gravitational pull dominating the outward pressure that fusion used to supply. The star therefore collapses under its own mass, its volume shrinking and the material inside growing denser and denser. If the original star is not massive enough (roughly more than 8-10 solar masses) to form a neutron star, the remnant generally goes on to form a white dwarf, the final evolutionary stage of such stars.

In the main sequence a star fuses hydrogen into helium, releasing an enormous amount of energy that counterbalances the inward gravitational attraction. When the hydrogen in the core is exhausted, the star expands into a red giant and fuses helium into carbon and oxygen (true for low- and medium-mass stars). If the red giant cannot reach the temperature needed to fuse carbon (around 1 billion K), the core fills with inert carbon and oxygen. After the outer layers are shed, forming a planetary nebula, the remaining core of inert carbon and oxygen is the white dwarf. This is the most common scenario, and such white dwarfs are the most abundant in the universe. Some small stars, however, never manage to fuse helium into carbon and oxygen, and these form helium-rich white dwarfs.

Stellar Equilibrium

At the white dwarf stage, further collapse of the star is prevented by electron degeneracy pressure. But there is a maximum mass that this degeneracy pressure can support, called the Chandrasekhar limit, approximately equal to 1.44 solar masses. If a white dwarf accretes enough matter from a companion star to go past the Chandrasekhar limit, electron degeneracy pressure can no longer halt the gravitational collapse: the white dwarf is either destroyed in a violent and extremely bright event known as a supernova, or its core collapses further towards the formation of a neutron star.
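
Up to a numerical factor of order unity, the Chandrasekhar limit is fixed by fundamental constants,

$$M_{\rm Ch} \sim \frac{1}{(\mu_e m_H)^2}\left(\frac{\hbar c}{G}\right)^{3/2} \approx 1.4\,M_\odot,$$

where $\mu_e$ is the mean molecular weight per electron (about 2 for a carbon-oxygen white dwarf), $m_H$ is the mass of the hydrogen atom, and $M_\odot$ is the solar mass.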


Transformation into a black dwarf and the final fate

Since there is no nuclear fusion going on in a white dwarf, it slowly radiates away whatever residual heat it has and cools down over time. Young white dwarfs are bluish-white (hot), but with time they become redder as they cool. Eventually these dwarf stars should stop radiating altogether and turn into black dwarfs. However, the time taken for a white dwarf to become a black dwarf is calculated to be greater than the present age of the universe, so no black dwarfs are thought to exist in the universe today; even the oldest white dwarfs still radiate with temperatures of a few thousand kelvin, which is a mind-blowing fact!! Even if a black dwarf forms, locating it will be a real challenge because of its non-radiating nature.

No matter how dark, cool or insignificant they are, such entities will continue to roam around in the eternal space and tell the stories of their glorious past!! Such death beds of once fierce and powerful stars will keep whispering through the ages!!

By
Prabir Rudra


Wednesday, 3 August 2022

Neutron Star : The most extreme and violent object in the Universe


 

When a massive supergiant star (around 10 to 30 solar masses or more) undergoes gravitational collapse under its own mass, it leaves behind a highly compact core called a neutron star. Except for black holes and some hypothetical objects (white holes, strange stars), neutron stars are the densest and smallest stellar objects known to us. The typical radius of a neutron star is around 10 km and its mass around 1.4 solar masses.

Formation


When a star uses up all its thermonuclear fuel at the end of its lifetime, it undergoes gravitational collapse under its own mass. If the star is not massive enough, the collapse stops at an intermediate stage known as a white dwarf, where further collapse is prevented by electron degeneracy pressure. In a sufficiently massive star, however, the collapsing core exceeds the Chandrasekhar limit of about 1.4 solar masses, beyond which electron degeneracy pressure can no longer resist gravity. The core then collapses further until its density becomes comparable to that of atomic nuclei, and the collapse drives a supernova explosion that blows away the star's outer layers. The compact remnant left behind is a neutron star, in which further collapse is resisted by neutron degeneracy pressure and repulsive nuclear forces. The core of the progenitor star just before collapse is primarily iron, the end product of the fusion of lighter elements; since iron cannot release energy by further fusion, the burning stops and the remnant simply cools with time.

Properties

Neutron stars may or may not be stable configurations depending on various conditions. They may collapse further by accreting matter and go on to form black holes, or they may collide with other astronomical bodies to produce some of the most violent events in the universe. Once a neutron star forms, it no longer generates heat and gradually cools with time; nevertheless, the neutron stars observable from Earth have surface temperatures of around 600,000 K. Their magnetic fields are between 100 million and 1 quadrillion times stronger than Earth's. The most striking feature of a neutron star is its extreme density: a matchbox full of neutron star material would weigh around 3 billion tonnes. The gravitational field at the surface of a neutron star is about 200 billion times that of the Earth.

As a star collapses, its rotation speed increases as a consequence of the conservation of angular momentum, so neutron stars spin at extreme rates, up to several hundred rotations per second. Some neutron stars emit beams of electromagnetic radiation as they rotate; these are called pulsars. Indeed, it was the discovery of pulsars by Jocelyn Bell Burnell in 1967 that pointed to the existence of extremely compact objects like neutron stars. Neutron stars with very strong magnetic fields are known as magnetars.
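
The spin-up is a direct consequence of angular momentum conservation: for a collapsing core of fixed mass, $I\omega$ stays constant and the moment of inertia scales as $I \propto R^2$, so

$$\omega_{\rm final} = \omega_{\rm initial}\left(\frac{R_{\rm initial}}{R_{\rm final}}\right)^{2}.$$

A core shrinking from several thousand kilometres to about 10 km therefore spins up by a factor of order $10^5$; the figures here are rough illustrative estimates, but they show how rotation periods of a fraction of a second become natural.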

Most neutron stars are unknown to us, since they are old, cold, and radiate very little. Nevertheless, it is estimated that there are around 1 billion neutron stars in the Milky Way. This number can be estimated from the number of supernova explosions that have occurred, since the formation of a neutron star is always accompanied by such an extreme event. That is why these are among the most extreme objects in the universe!!

Two neutron stars can form a binary system and accrete mass, which makes the system bright in X-rays. Sometimes the binary eventually merges, and such neutron star mergers produce gamma-ray bursts and act as strong sources of gravitational waves; a direct detection of such an event was made in 2017. Finally, if a neutron star accumulates enough mass through accretion, the repulsive nuclear forces are no longer able to prevent further collapse of the core, and it collapses all the way to a singularity, forming a black hole.


by

Prabir Rudra

Special Theory of Relativity (Meme)





by 
Prabir Rudra

 


Red dwarfs: the immortal entities of the universe





Red dwarfs are small stars, typically a few tenths of a solar mass or less, that shine by converting hydrogen into helium through nuclear fusion. The rate of burning is so slow that their estimated life span is of the order of trillions of years.

Since our universe is just 13.8 billion years old, it has not yet seen the death of any red dwarf. Proxima Centauri, the nearest star to the Sun, is a red dwarf. How amazing is that!!

In fact, fifty of the sixty closest stars are red dwarfs. Due to their low luminosity they are not easily visible; interestingly, not a single red dwarf can be seen from Earth with the naked eye. Red dwarfs, besides being the smallest main-sequence stars, are also the coolest: the coolest red dwarfs have temperatures of around 2000 K, and the smallest have radii of around 9% of the Sun's, with masses of around 7.5% of the solar mass.


by

Prabir Rudra

