Cosmological models of the Universe: the model of the expanding Universe and alternative cosmologies.

Useful tips 20.11.2023

In 1917, A. Einstein constructed a model of the Universe. To overcome the gravitational instability of the Universe, this model introduced a cosmological repulsive force, the lambda term. Einstein would later call it his gravest mistake, contrary to the spirit of the theory of relativity he had created, in which gravity is identified with the curvature of space-time. Einstein's Universe had the shape of a hypercylinder whose extent was determined by the total amount and composition of the forms in which energy manifests itself (matter, field, radiation, vacuum). Time in this model runs from the infinite past to the infinite future. Thus the amount of energy and mass of the Universe (matter, field, radiation, vacuum) is proportionally related to its spatial structure: space is closed (finite in volume yet without boundary), while time is infinite.

Researchers who analyzed this model pointed to its extreme instability, like that of a coin standing on edge: one side corresponds to an expanding Universe, the other to a closed one. With some physical parameters of the Universe taken into account, Einstein's model yields an eternally expanding Universe; with others, a closed one. For example, the Dutch astronomer W. de Sitter, assuming that time is curved in the same way as space in Einstein's model, obtained a model of the Universe in which time comes to a complete stop in very distant objects.

A. Friedman, a physicist and mathematician at Petrograd University, published in 1922 the article "On the Curvature of Space." It presented the results of a study of the general theory of relativity, which does not exclude the mathematical possibility of three models of the Universe: a model in Euclidean space (K = 0); a model with positive curvature (K > 0); and a model in the Lobachevsky-Bolyai space (K < 0).

In his calculations, A. Friedman proceeded from the position that the radius of the Universe is proportional to the amount of energy, matter, and the other forms of its manifestation in the Universe as a whole. Friedman's mathematical conclusions removed the need to introduce a cosmological repulsive force, since the general theory of relativity admits a model of the Universe in which a phase of expansion corresponds to a phase of compression accompanied by an increase in the density and pressure of the energy-matter composing the Universe (matter, field, radiation, vacuum). Friedman's conclusions were doubted by many scientists, including A. Einstein himself, even though already in 1908 the mathematician H. Minkowski, in giving a geometric interpretation of the special theory of relativity, had obtained a model of the Universe with zero curvature (K = 0), i.e., a model in Euclidean space.

N. Lobachevsky, the founder of non-Euclidean geometry, measured the angles of a triangle formed by stars far from the Earth and found that their sum is 180°, i.e., that space on those scales is Euclidean. The observed Euclidean geometry of the Universe remains one of the mysteries of modern cosmology. It is currently believed that the density of matter in the Universe is 0.1-0.2 of the critical density, which is approximately 2·10^-29 g/cm^3. If the density exceeded the critical value, the Universe would eventually stop expanding and begin to contract.
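The critical density quoted here follows from the Friedmann equations as ρ_c = 3H²/(8πG). A minimal Python sketch (assuming a modern value H ≈ 70 km/(s·Mpc), which is not a figure from the text) shows the order of magnitude:

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G).
# H = 70 km/(s*Mpc) is an assumed modern value; Hubble's original
# H ~ 500 km/(s*Mpc) would give a much larger rho_c.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.086e22           # one megaparsec in metres
H = 70e3 / MPC_M           # Hubble constant in s^-1

rho_c = 3 * H**2 / (8 * math.pi * G)  # kg/m^3
rho_c_cgs = rho_c * 1e-3              # g/cm^3 (x1000 g/kg, /1e6 cm^3/m^3)
print(f"critical density ~ {rho_c_cgs:.1e} g/cm^3")
```

With H ≈ 70 this gives roughly 10^-29 g/cm^3, consistent in order of magnitude with the figure quoted above.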

A. Friedman's model with K > 0 is a Universe that expands from an initial state to which it must eventually return. In this model the concept of the age of the Universe appeared: the existence of a state preceding the one observed at a given moment.

Assuming that the mass of the entire Universe equals 5·10^21 solar masses, A. Friedman calculated that, according to the K > 0 model, the observable Universe was in a compressed state approximately 10-12 billion years ago. After that it began to expand, but this expansion will not be endless: after a certain time the Universe will contract again. A. Friedman declined to discuss the physics of the initial, compressed state of the Universe, since the laws of the microworld were not yet clear at that time. His mathematical conclusions were repeatedly checked and rechecked, not only by A. Einstein but also by other scientists. In time, A. Einstein, replying to a letter from A. Friedman, acknowledged the correctness of these results and called Friedman "the first scientist to take the path of constructing relativistic models of the Universe." Unfortunately, A. Friedman died young; in him science lost a talented scientist.

As noted above, neither A. Friedman nor A. Einstein knew of the data on the "scattering" of galaxies obtained by the American astronomer V. Slipher (1875-1969) in 1912; by 1925 he had measured the velocities of several dozen galaxies. Friedman's cosmological ideas were therefore discussed mainly in theoretical terms. But already in 1929 the American astronomer E. Hubble (1889-1953), using a telescope equipped with instruments for spectral analysis, discovered the effect of "redshift": the light coming from the galaxies he observed was shifted toward the red part of the visible spectrum. This meant that the observed galaxies are receding, "scattering," away from the observer.

The redshift effect is a special case of the Doppler effect, first explained by the Austrian scientist C. Doppler (1803-1853) in 1842. When a wave source recedes from the device recording the waves, the wavelength increases; when the source approaches a stationary receiver, the wavelength becomes shorter. For light waves, long wavelengths correspond to the red end of the visible spectrum and short wavelengths to the violet end. E. Hubble used the redshift effect to measure the distances to galaxies and the speeds of their recession: if the redshift of galaxy A is, for example, twice that of galaxy B, then the distance to galaxy A is twice the distance to galaxy B.
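The redshift-to-distance rule can be illustrated numerically. A small sketch (the wavelengths below are hypothetical, chosen only for illustration) computes z = Δλ/λ and the corresponding low-velocity Doppler speed v ≈ c·z:

```python
C = 299_792.458  # speed of light, km/s

def redshift(lambda_emit, lambda_obs):
    """Fractional shift z = (observed - emitted) / emitted."""
    return (lambda_obs - lambda_emit) / lambda_emit

# Hypothetical galaxy A: a line emitted at 656.3 nm is observed at 663.0 nm.
z_a = redshift(656.3, 663.0)
v_a = C * z_a                 # recession speed in km/s, valid while v << c

# A galaxy B with half that redshift is, by the rule above, half as distant.
z_b = z_a / 2
```

Here z_a ≈ 0.01, i.e. a recession speed of about 3000 km/s, small enough for the non-relativistic approximation to hold.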

E. Hubble found that all observed galaxies recede in every direction of the celestial sphere at a speed proportional to their distance: Vr = Hr, where r is the distance to the observed galaxy, measured in parsecs (1 pc is approximately 3.1·10^16 m), Vr is the galaxy's recession speed, and H is the Hubble constant, the coefficient of proportionality between a galaxy's speed and its distance from the observer. (The celestial sphere is a concept used to describe objects in the starry sky as seen with the naked eye; the ancients took it for a reality on whose inner surface the stars were fixed.) Calculating this quantity, which later became known as the Hubble constant, E. Hubble concluded that it was approximately 500 km/(s·Mpc). In other words, a stretch of space one megaparsec long grows by 500 km every second.

The formula Vr = Hr allows us to consider not only the recession of galaxies but also the reverse situation: motion back toward some initial position, the beginning of the "scattering" of galaxies in time. The reciprocal of the Hubble constant has the dimension of time: t = r/Vr = 1/H. With the value of H given above, E. Hubble obtained a starting time for the "scattering" of galaxies of about 2 billion years, which made him doubt the correctness of the value he had calculated. Using the redshift effect, E. Hubble reached the most distant galaxies known at the time: the farther the galaxy, the lower its brightness as perceived by us. This allowed him to state that the formula Vr = Hr expresses the observed fact of the expansion of the Universe discussed in A. Friedman's model. A number of scientists came to regard E. Hubble's astronomical research as experimental confirmation of Friedman's model of a non-stationary, expanding Universe.
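The arithmetic of t = 1/H is only a unit conversion and is easy to check. A short sketch (the "modern" H ≈ 70 km/(s·Mpc) is an assumption, not a figure from the text):

```python
MPC_KM = 3.086e19        # one megaparsec in kilometres
SEC_PER_YEAR = 3.156e7   # seconds in a year

def hubble_time_gyr(h_km_s_mpc):
    """Hubble time 1/H, in billions of years."""
    seconds = MPC_KM / h_km_s_mpc   # km and Mpc cancel, leaving seconds
    return seconds / SEC_PER_YEAR / 1e9

t_1929 = hubble_time_gyr(500)  # Hubble's original constant -> about 2 Gyr
t_now = hubble_time_gyr(70)    # a modern value -> about 14 Gyr
```

The ~2-billion-year figure was genuinely troubling: it is less than the geological age of the Earth, which is one reason the calibration of H was doubted.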

Already in the 1930s some scientists expressed doubts about E. Hubble's data. For example, P. Dirac put forward a hypothesis of the natural reddening of light quanta due to their quantum nature and their interaction with the electromagnetic fields of outer space. Others pointed to a theoretical difficulty with the Hubble constant: why should its value be the same at every moment in the evolution of the Universe? Such constancy presupposes that the laws of nature known to us, which operate in the Metagalaxy, hold for the entire Universe. Perhaps, the critics argued, there are other laws with which the Hubble constant will not comply.

For example, they say, light can "redden" through the influence of the interstellar (ISM) and intergalactic (IGM) medium, which can lengthen its wavelength on the way to the observer. Another question that provoked discussion in connection with E. Hubble's research was whether galaxies could move at speeds exceeding the speed of light. If so, such galaxies may disappear from our observation, since according to the theory of relativity no signal can be transmitted faster than light. Nevertheless, most scientists accept that E. Hubble's observations established the fact of the expansion of the Universe.

The recession of galaxies does not mean expansion within the galaxies themselves, since their structural integrity is maintained by internal gravitational forces.

E. Hubble's observations contributed to further discussion of A. Friedman's models. The Belgian priest and astronomer G. Lemaître (in the first half of the 20th century) drew attention to the following circumstance: the recession of galaxies means the expansion of space, and hence in the past the volume of the Universe was smaller and its density higher. Lemaître called the initial state of matter, with a density of 10^93 g/cm^3, the "primeval atom," from which God created the world. From this model it follows that the concept of the density of matter can be used to fix the limits of applicability of the concepts of space and time: at a density of 10^93 g/cm^3 the usual physical meaning of time and space is lost. The model drew attention to a physical state with super-dense and super-hot parameters. In addition, models of a pulsating Universe were proposed: the Universe expands and contracts but never reaches extreme limits. Pulsating models place great emphasis on measuring the density of the Universe's energy-matter; when a critical density limit is reached, the Universe switches between expansion and contraction. From this line of work came the term "singular" state (Lat. singularis - separate, single), a state in which density and temperature take on infinite values. This line of research also ran into the problem of the "hidden mass" of the Universe: the observed mass of the Universe does not coincide with the mass calculated from theoretical models.

The Big Bang model. Our compatriot G. Gamow (1904-1968) worked at Petrograd University and was familiar with A. Friedman's cosmological ideas. In 1934 he was sent on a scientific trip to the USA, where he remained for the rest of his life. Under the influence of Friedman's ideas, G. Gamow became interested in two problems: 1) the relative abundance of chemical elements in the Universe, and 2) their origin. By the end of the first half of the twentieth century these problems were being actively discussed: where could the heavy chemical elements have formed, if hydrogen (1H) and helium (4He) are the most abundant elements in the Universe? Gamow suggested that the chemical elements trace their history back to the very beginning of the expansion of the Universe.

G. Gamow's model came to be called the Big Bang model, but it has another name as well: the "alpha-beta-gamma theory," after the initial letters of the authors of the article (Alpher, Bethe, Gamow) published in 1948, which contained the model of the "hot Universe." The main idea of the article belonged to G. Gamow.

Briefly about the essence of this model:

1. The “original beginning” of the Universe, according to Friedman’s model, was represented by a super-dense and super-hot state.

2. This state arose as a result of the previous compression of the entire material and energy component of the Universe.

3. This state occupied an extremely small volume.

4. Having reached a certain limit of density and temperature in this state, the energy-matter exploded; a Big Bang occurred, which Gamow called the "Cosmological Big Bang."

5. This was no ordinary explosion: it was an expansion of space itself rather than a burst of matter into pre-existing space.

6. The Big Bang imparted a certain speed of motion to all the fragments of the initial physical state.

7. Since the initial state was superhot, the expansion should preserve the remnants of this temperature in all directions of the expanding Universe.

8. The value of this residual temperature should be approximately the same at all points of the Universe.

This predicted radiation was called relict (i.e., ancient) or background radiation.

In 1953 G. Gamow calculated the temperature of this relict radiation; it came out to about 10 K. Relict radiation is microwave electromagnetic radiation.

In 1964 the American specialists A. Penzias and R. Wilson discovered the relict radiation by accident. Setting up the antenna of a new radio telescope, they could not get rid of interference at a wavelength of about 7 cm. This noise came from space, equal in intensity in all directions. Measurements of this background radiation gave a temperature of less than 10 K.

Thus G. Gamow's hypothesis of a relict, background radiation was confirmed. In his work on the temperature of the background radiation, G. Gamow used A. Friedman's formula expressing the change of radiation density with time in the parabolic (K = 0) model of the Universe; Friedman had considered a state in which radiation dominates over the matter of an infinitely expanding Universe.

According to Gamow's model, there were two eras in the development of the Universe: a) the predominance of radiation (the physical field) over matter, and b) the predominance of matter over radiation. In the initial period radiation predominated over matter; then came a time when their ratio was equal, and then a period when matter began to predominate over radiation. Gamow placed the boundary between these eras at 78 million years.

At the end of the twentieth century, measurements of microscopic variations in the background radiation, which came to be called "ripples," led a number of researchers to argue that these ripples reflect changes in the density of matter and energy produced by the action of gravitational forces at early stages of the development of the Universe.

The Inflationary Universe model. The term "inflation" (Lat. inflatio) means swelling. The model was proposed by two researchers, A. Guth and P. Steinhardt. In this model the evolution of the Universe is accompanied by a gigantic swelling of the quantum vacuum: in 10^-30 s the size of the Universe increases by a factor of 10^50. Inflation is an adiabatic process, associated with cooling and with the emergence of the differences between the weak, electromagnetic, and strong interactions. A rough analogy for the inflation of the Universe is the sudden crystallization of a supercooled liquid. Initially the inflationary phase was regarded as the "rebirth" of the Universe after the Big Bang. Current inflation models use the concept of the inflaton field: a hypothetical field (from the word "inflation") in which random fluctuations formed a homogeneous configuration of size greater than 10^-33 cm, from which came the expansion and heating of the Universe in which we live.

The description of events in the Universe based on the Inflationary Universe model coincides completely with the description based on the Big Bang model, starting from 10^-30 s after the beginning of the expansion. The inflationary phase means that the observable Universe is only a part of the whole Universe. In the textbook by T. Ya. Dubnischeva, "Concepts of Modern Natural Science," the following course of events is proposed according to the Inflationary Universe model:

1) t ≈ 10^-45 s. At this moment after the beginning of the expansion, the radius of the Universe was approximately 10^-50 cm. This event is unusual from the standpoint of modern physics: it is assumed to be preceded by events generated by quantum effects of the inflaton field. This time is less than the "Planck time" of 10^-43 s, but this does not trouble the supporters of the model, who carry out calculations down to 10^-50 s;

2) t ≈ from 10^-43 to 10^-35 s - the era of "Grand Unification," the unification of all the forces of physical interaction;

3) t ≈ from 10^-35 to 10^-5 s - the rapid part of the inflationary phase, when the diameter of the Universe increased by a factor of 10^50; this is the period of the emergence and formation of the electron-quark medium;

4) t ≈ from 10^-5 to 10^5 s - first quarks are confined within hadrons, and then the nuclei of future atoms form, from which matter is subsequently built.

From this model it follows that within about one second of the start of the expansion, matter emerges and separates from the photons of the electromagnetic interaction, and protosuperclusters and protogalaxies form. Heating occurs through the appearance and mutual interaction of particles and antiparticles, a process called annihilation (Lat. nihil - nothing, i.e., transformation into nothing). The authors of the model believe that annihilation was asymmetric, favoring the formation of the ordinary particles that make up our Universe. Thus the main idea of the Inflationary Universe model is to eliminate the Big Bang as a special, exceptional state in the evolution of the Universe. However, an equally unusual state appears in this model: the configuration of the inflaton field. The age of the Universe in these models is estimated at 10-15 billion years.

The inflationary model and the Big Bang model provide an explanation for the observed inhomogeneity of the Universe (the condensations of matter). In particular, it is believed that during the inflation of the Universe cosmic inhomogeneities, "textures," arose as embryos of aggregates of matter, which later grew into galaxies and their clusters. Evidence of this is the deviation of the temperature of the relict radiation from its mean value of 2.7 K, recorded in 1992 and amounting to approximately 0.00003 K. Both models describe a hot expanding Universe that is on average homogeneous and isotropic with respect to the relict radiation; isotropy here means that the relict radiation is almost identical in all parts of the observable Universe and in all directions from the observer.
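The degree of isotropy implied by these figures is easy to quantify: the fractional deviation ΔT/T is about one part in 100,000. A one-line check using only the numbers quoted in the text:

```python
# Figures from the text: mean relict-radiation temperature 2.7 K,
# deviations of about 0.00003 K (the 1992 measurement).
mean_T = 2.7          # K
delta_T = 0.00003     # K
fractional = delta_T / mean_T  # ~1.1e-5: isotropic to ~1 part in 100,000
```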

There are alternatives to the Big Bang and Inflationary Universe models: the models of the "Stationary Universe," the "Cold Universe," and "self-consistent cosmology."

The Stationary Universe model. This model was developed in 1948 on the basis of the principle of the "cosmological constancy" of the Universe: not only should no place in the Universe be privileged, but no moment of time should be privileged either. Its authors are H. Bondi, T. Gold, and F. Hoyle, the last of whom is well known as an author of popular books on cosmology. In one of his works he wrote: "Every cloud, galaxy, every star, every atom had a beginning, but not the Universe as a whole; the Universe is something more than its parts, though this conclusion may seem unexpected." The model assumes the presence in the Universe of an internal source, a reservoir of energy, that maintains the density of its energy-matter at a constant level, preventing the contraction of the Universe. For example, F. Hoyle argued that if one atom appeared in every bucket of space every 10 million years, the density of energy, matter, and radiation in the Universe as a whole would remain constant. The model does not explain how the atoms of the chemical elements, matter, etc. arose. The discovery of the relict, background radiation greatly undermined its theoretical foundations.

The Cold Universe model. The model was proposed in the 1960s by the Soviet astrophysicist Ya. Zeldovich. Comparison of the theoretical values of radiation density and temperature in the Big Bang model with radio astronomy data allowed Zeldovich to advance the hypothesis that the initial physical state of the Universe was a cold proton-electron gas with an admixture of neutrinos: for each proton, one electron and one neutrino. The discovery of the relict radiation, confirming the hypothesis of an initial hot state in the evolution of the Universe, led Zeldovich to abandon his own Cold Universe model. However, the idea of calculating the relationship between the numbers of different kinds of particles and the abundance of the chemical elements in the Universe proved fruitful; in particular, it was found that the energy-matter density in the Universe coincides with the density of the relict radiation.

The Universe in an Atom model. This model states that there is in fact not one Universe but many. It is based on A. Friedman's concept of a closed world: a region of the Universe in which the forces of attraction between its components are equal to the energy of their total mass. The external dimensions of such a universe can be microscopic: to an outside observer it is a microscopic object, but to an observer inside it everything looks different - it has its own galaxies, stars, and so on. These objects are called "friedmons." Academician M. A. Markov hypothesized that the number of friedmons may be unlimited and that they may be completely open, that is, have an entrance to their world and an exit (a connection) to other worlds. It turns out that there are many Universes - or, as Corresponding Member of the USSR Academy of Sciences I. S. Shklovsky called it in one of his works, a Metaverse.

The idea of a multiplicity of Universes was also expressed by A. Guth, one of the authors of the inflationary model of the Universe. In an inflating Universe the formation of "aneurysms" (a medical term for the bulging of blood-vessel walls) budding off from the mother Universe is possible. According to Guth, creating a universe is quite possible: to do so, one would need to compress 10 kg of matter to a size smaller than one quadrillionth of an elementary particle.

SELF-TEST QUESTIONS

1. “Big Bang” model.

2. E. Hubble's astronomical research and its role in the development of modern cosmology.

3. Relict, background radiation.

4. Model “Inflationary Universe”.

Model of the Universe. The Stationary Universe.

In the 20th century two cosmological theories competed: the theory of the expanding Universe (the initial state from which the Universe arose was so hot and dense that only elementary particles and radiation could exist; the Universe then expanded and cooled, forming stars and galaxies) and the theory of the stationary Universe (the Universe has always existed, and the observed rarefaction of matter is compensated by its continuous creation).

The Stationary Universe. Einstein applied the equations of general relativity to the Universe as a whole and related the curvature of space-time to its matter. He introduced, by hand, a "cosmic repulsion" that was very small but kept the Universe from collapsing to a point. The theory of the stationary Universe does not deny the expansion of the Universe. Ideas of the continuous creation of matter have arisen repeatedly. Thus in 1948 a group of scientists at Cambridge University (H. Bondi, T. Gold, and F. Hoyle) advanced the hypothesis of a stationary Universe: it is quite possible that it is the creation of new matter that leads to the expansion of the Universe, and not the other way round. The overall age of a stationary Universe is a meaningless concept. For the density of the Universe to remain constant, new particles must constantly be formed. The laws of conservation of matter and energy apply only to finite volumes, and since every hydrogen atom created in 1 m^3 is balanced by an atom leaving that volume, the conservation law is not violated; it can be verified only in a limited region of space. A supporter of this view, the Swedish astrophysicist H. Alfvén, winner of the 1970 Nobel Prize, believes that interstellar space is filled with long "filaments" and other structures consisting of plasma. The forces that make plasma form such figures also make it form galaxies, stars, and stellar systems. In his view the Universe expands under the influence of the energy released in the annihilation of particles and antiparticles, though this expansion proceeds somewhat more slowly.

Consequences. Consequences of this research: 1) quasars have a small radiation power, not one several orders of magnitude higher than that of entire galaxies, as is commonly held in modern cosmology; 2) in quasars matter disperses at speeds up to that of light, and the superluminal values are obtained as a result of overestimating the size of the Universe. The cause of the aging (reddening) of quanta is seen in a gravitational shift of the radiation frequency, proportional not to the distance to the light source but to the square of the distance; in that case the size of the visible part of the Universe is not 15 billion light years but 5. Claims of a "final proof" of the hot origin of the Universe and of the velocity nature of the cosmological redshift remain controversial. E. Hubble, who discovered the law of cosmological redshifts in 1929, published in 1936 the first observational evidence against the idea of the recession of galaxies. In particular, it was found that empirical dependences obtained by statistical processing of about one hundred catalogues of extragalactic objects agree with theoretical relations derived from the ideas of a stable Universe and the "aging" of photons, while standing in irreconcilable contradiction with the cosmological models of the Big Bang theory for any combination of those models' parameters.
"...A thorough study of possible sources of error shows that the observations seem consistent with the idea of the non-velocity nature of redshifts. ...In theory the relativistic expansion of the Universe still stands, although observation does not allow us to establish the nature of the expansion. So the exploration of space has ended on a note of uncertainty - but so it must be. We are, by definition, at the very centre of the observed region. Our nearest neighbours we know, perhaps, rather well. As distance increases, our knowledge diminishes, and diminishes quickly. Ultimately we reach the limit of our telescopes, and there we observe shadows and search among ghostly measurement errors for landmarks that are scarcely more real. The research will continue. Until the possibilities of the empirical approach are exhausted, we should not plunge into the illusory world of speculative constructions." (Hubble, "The Realm of the Nebulae," 1936)

Field theory of elementary particles. The field theory of elementary particles claims to have established a mechanism by which photons lose part of their energy as they travel across the Universe, an alternative to the Doppler effect and the Big Bang hypothesis: photon-neutrino interactions, which the Standard Model ignores. On this view, the redshift cannot be taken as evidence of the Big Bang, nor can it be used to judge the speed of distant objects. Thus the idea of a stationary Universe received unexpected support and, its proponents argue, cannot now be discounted.

Photon-neutrino interactions. According to the field theory of elementary particles, the electron neutrino (like any other elementary particle) has constant electric and magnetic fields and an alternating electromagnetic field. By classical electrodynamics, these fields should interact with other electromagnetic fields, including the electromagnetic field of a photon.
Thus the passage of a photon through an electron neutrino (ejected in gigantic quantities by stars) or through its bound state (νe2) will not go unnoticed: however weak the change, the photon's energy will decrease. And the more electron neutrinos or their bound states a photon meets on its path, the more energy it loses and, accordingly, the stronger the redshift. It is one thing when a photon flies on a parallel course with an electron neutrino (moving at about the speed of light), both having been emitted by the Sun; it is quite another when a photon collides with a neutrino at rest, with a bound state of two electron neutrinos (νe2), or with a neutrino released by another star and moving in a different direction. The energy a photon loses in an interaction with an electron neutrino depends on the orientation of the neutrino's spin, on the trajectory along which the photon passes through the neutrino, and on the energy of the photon itself. This is not easy to calculate, but it could be measured using spacecraft and lasers. Note that this interaction does not fit the Standard Model, which assigns the participating particles different fundamental interactions: the neutrino, the weak interaction; the photon, the electromagnetic interaction. The conclusion about the recession of galaxies therefore rests, on this view, on a one-sided interpretation of the redshift in favour of the Doppler effect. In contrast, the field theory of elementary particles asserts the presence of electromagnetic fields in all elementary particles, including so elusive a particle as the electron neutrino.
Consequently, the photon and the electron neutrino, possessing electromagnetic fields in common, should by classical electrodynamics interact with each other, and the "aging of light" hypothesis gains an ally in the field theory of elementary particles. And if the Standard Model is discarded, as its critics urge, the Big Bang theory is automatically reduced to the level of a mere hypothesis.

Redshift. Over the centuries different cosmological models replaced one another, but it was considered absolutely unshakable that the Universe is infinite in time and space; the starry sky overhead was a symbol of eternity and immutability. But in 1929, on the basis of observations of the spectra of galaxies, Edwin Hubble formulated his law, from which it follows that the Universe is expanding. It reads: the speed at which galaxies recede increases in proportion to the distance to them, v = Hr, where v is the speed at which a galaxy recedes from us, r is the distance to it, and H is the Hubble constant, H ≈ 70 km/(s·Mpc). Hubble's law does not at all mean that our Galaxy is the centre from which the expansion proceeds: an observer anywhere in the Universe sees the same picture, all galaxies running away from one another. That is why it is said that space itself is expanding. The expansion of the Universe is the grandest natural phenomenon known to mankind. The faster a galaxy recedes from us, the more the lines in its spectrum are shifted toward the red, in accordance with the Doppler effect. The effect is named after Christian Andreas Doppler, who proposed the first known physical explanation of the phenomenon in 1842. The hypothesis was tested and confirmed for sound waves by the Dutch scientist C. H. D. Buys Ballot in 1845.
Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion relative to the Earth. The phenomenon called "red shift" is a decrease in the observed radiation frequencies of all distant sources (galaxies, quasars), indicating that these sources are receding from each other and, in particular, from our Galaxy, i.e. that the Metagalaxy is non-stationary (expanding). A red shift is also observed at other frequencies, for example in the radio range. The opposite effect, a shift toward higher frequencies, is called violet shift.

Most often, the term "redshift" refers to two phenomena: cosmological and gravitational. Cosmological redshift is the observed shift of spectral lines toward longer wavelengths in the light of a distant cosmic source (such as a galaxy or quasar) in an expanding Universe, compared to the wavelength of the same lines measured from a stationary source. Redshift is also a measure of the time elapsed between the start of the Universe's expansion and the moment the light was emitted in a galaxy. Thus, according to modern astronomical data, the very first galaxies formed at a time corresponding to redshift 5, that is, at about 1/15 of the current age of the Universe. This means that the light from these galaxies took approximately 8.5 billion years to reach us.

Until the beginning of the twentieth century, scientists believed that the main objects in the Universe were motionless relative to each other. Then, in 1913, the American astronomer Vesto Melvin Slipher began studying the spectra of light coming from a dozen known nebulae and concluded that they were moving away from the Earth at speeds reaching millions of miles per hour. How did Slipher come to such an amazing conclusion? Traditionally, astronomers had used spectrographic analysis to determine the chemical elements present in stars.
The spectrum of light was known to be associated with particular elements, showing characteristic line patterns that served as a kind of calling card of the element. Slipher noticed that in the spectra of the galaxies he studied, the lines of certain elements were shifted towards the red end of the spectrum. This curious phenomenon was called "red shift". It is therefore held that the red shift of galaxies was first discovered by V. Slipher, and in 1929 E. Hubble discovered that the red shift of distant galaxies is greater than that of nearby ones and increases approximately in proportion to distance (Hubble's law).

Various explanations have been proposed for the observed shifts of spectral lines. One such is the hypothesis of the decay of light quanta over the millions and billions of years during which the light of distant sources travels to an earthly observer; on this hypothesis, the energy decreases during the decay, which is associated with a change in the frequency of the radiation. However, this hypothesis is not supported by observations. In particular, within the framework of the hypothesis, the red shift in different parts of the spectrum of the same source should be different. Meanwhile, all observational data indicate that the red shift does not depend on frequency: the relative change in frequency, Z = (f0 − f')/f0, is exactly the same for all radiation frequencies of a given source, not only in the optical but also in the radio range (here f0 is the frequency of a certain line in the source's spectrum, and f' is the frequency of the same line recorded by the receiver). In the theory of relativity, the Doppler redshift is treated as the result of the slowing of the flow of time in a moving frame of reference (an effect of the special theory of relativity).
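The frequency-independence of Z claimed above is easy to illustrate: if every emitted frequency is scaled by one and the same factor, as a Doppler shift does, then the ratio Z comes out identical for an optical line and a radio line. A sketch under that assumption (the scale factor and the line frequencies are illustrative, not from the text):

```python
def relative_shift(f0, f_received):
    """Z = (f0 - f') / f0, as defined in the text."""
    return (f0 - f_received) / f0

scale = 0.9  # assumed: the receiver sees every frequency reduced by the same factor

for f0 in (5.0e14, 1.0e9):  # an optical line and a radio line, in Hz
    z = relative_shift(f0, f0 * scale)
    print(f0, z)  # Z is ~0.1 for both lines, independent of frequency
```

A mechanism such as quantum decay, by contrast, would generally make the loss depend on frequency, which is exactly the disagreement with observation that the text describes.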
Photographing the spectra of faint (distant) sources to measure redshift, even with the largest instruments and the most sensitive photographic plates, requires favorable observing conditions and long exposures. For galaxies, shifts of Z = 0.2 are confidently measured, corresponding to a speed V = 60,000 km/s and a distance of over 1 billion pc. At such speeds and distances, Hubble's law is applicable in its simplest form (the error is about 10%, i.e. the same as the error in determining H). Quasars are on average a hundred times brighter than galaxies and can therefore be observed at distances ten times greater (if space is Euclidean). For quasars, Z = 2 and more is actually recorded; at a shift Z = 2, the speed is V = 240,000 km/s. It is believed that at such speeds and distances specific cosmological effects come into play: the non-stationarity and curvature of space-time. In particular, the concept of a single unambiguous distance becomes inapplicable (one of the distances, the redshift distance, is simply R = V/H = 4.5 billion pc). Thus the red shift is taken to indicate the expansion of the entire observable part of the Universe; this phenomenon is usually called the expansion of the (astronomical) Universe.

The gravitational redshift is considered a consequence of the slowing of the rate of time by a gravitational field (an effect of general relativity). This phenomenon (also called the Einstein effect, or the generalized Doppler effect) was predicted by A. Einstein in 1911 and has been observed since 1919, first in the radiation of the Sun and then of some other stars. The gravitational redshift is usually characterized by a conventional velocity V, calculated formally from the same formulas as in the case of cosmological redshift. Typical values: for the Sun V = 0.6 km/s, for the dense companion of Sirius (Sirius B) V = 20 km/s.
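The cosmological figures quoted above can be reproduced. For small shifts V ≈ Zc, while the text's V = 240,000 km/s at Z = 2 follows from the relativistic Doppler formula (the formula itself is standard but not given in the text; H = 70 km/(s·Mpc) as before):

```python
C = 299_792.458  # speed of light, km/s
H = 70.0         # Hubble constant, km/(s·Mpc)

def doppler_velocity(z):
    """Velocity from the relativistic Doppler formula, in km/s."""
    a = (1.0 + z) ** 2
    return C * (a - 1.0) / (a + 1.0)

# Small shift: V ~ Z*c, e.g. Z = 0.2 gives about 60,000 km/s.
print(round(0.2 * C))                    # 59958
# Large shift: Z = 2 gives 0.8c, i.e. the text's ~240,000 km/s.
print(round(doppler_velocity(2.0)))      # 239834
# Redshift distance R = V/H, in Mpc:
print(round(doppler_velocity(2.0) / H))  # ~3400 Mpc with H = 70
```

With H = 70 the redshift distance at Z = 2 comes out somewhat below the text's 4.5 billion pc, which corresponds to the smaller values of H in use when these figures were compiled.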
In 1959 the red shift due to the Earth's gravitational field, which is very small, was measured for the first time: V = 7.5 × 10^-5 cm/s (the Pound-Rebka experiment). In some cases (for example, during gravitational collapse), both types of redshift should be observed as a combined effect.

The presence of a redshift z in a galaxy's spectrum allows the distance to it to be determined with great accuracy from the formula R = zc/H. Some quasars have very high redshifts; such objects recede at speeds close to the speed of light. Redshifts have been measured for hundreds of thousands of galaxies, the most distant of them at a distance of 12 billion light years.

The conclusion that the Universe is expanding followed from Einstein's general theory of relativity, but even Einstein himself initially viewed it with skepticism: it was an idea of progressive evolution, and it contained a beginning, or, as it is put today, a moment of birth, which, of course, completely contradicted the existing conception of a Universe infinite in time and space. However, the idea has been confirmed by observation and is now generally accepted in the scientific world.

In 1946, George Gamow and his colleagues developed a physical hypothesis for the initial stage of the expansion of the Universe (the theory of the hot Universe), which correctly explains the abundances of the chemical elements in it by their synthesis at very high temperatures and pressures. The beginning of the expansion of the Universe in Gamow's theory therefore came to be called the "Big Bang". At its core, this theory assumes that in the beginning all the matter of the Universe was concentrated in a vanishingly small volume at immensely high temperature and pressure. Then, according to this scenario, it exploded with monstrous force. The explosion created a superheated ionized gas, or plasma.
This plasma expanded uniformly until it cooled to the point where it became an ordinary gas. Within this cooling, expanding cloud of gas, galaxies formed, and within the galaxies generations of stars were born. Then planets, such as our Earth, formed around the stars. But few people realize that even through the most powerful telescopes it is impossible actually to see the motion of galaxies away from us. The pictures we see are motionless, and scientists do not claim to show any visible motion, even if observations were to continue for centuries.

So, to find out whether the Universe is expanding or not, one must examine the light and other radiation that reaches us after crossing regions of interstellar space. The images formed from this radiation do not directly show the expansion of the Universe, but subtle features of the radiation have convinced scientists that the expansion is taking place. Scientists first assume that the terrestrial laws of physics apply unchanged everywhere in the Universe. They then try to understand how processes obeying these laws produce the light they observe. To understand how scientists use this way of analyzing light to conclude that the Universe is expanding, let us look into the history of astronomy and astrophysics.

Astronomers observing the heavens long ago noticed that, in addition to individual stars and planets, there were many faintly luminous bodies in the sky. They called them "nebulae", from the Latin word meaning "cloud" or "mist". Later, as the concept developed, these objects came to be called galaxies. The neighboring Andromeda galaxy appears larger than the full moon, yet is so dim that it is barely visible to the naked eye. Early in the twentieth century, astronomers turned powerful new telescopes to this and other galaxies and discovered that they were vast islands of billions of stars. Entire clusters of galaxies were discovered at great distances.
Before the discovery of stars in Andromeda, it was thought that all celestial bodies lay within the boundaries of our own galaxy. With the development of this concept and the discovery of other, more distant galaxies, everything changed: the size of the Universe expanded beyond comprehension. Having discovered the "red shift" phenomenon, V. Slipher began to explain it by the Doppler effect, from which one can conclude that the galaxies are moving away from us. This was the first big step towards the idea that the entire Universe is expanding.

The Doppler effect is often explained using the example of a train whistle, whose pitch changes as the train passes us. The phenomenon was first scientifically studied in 1842 by the Austrian physicist Christian Doppler. He reasoned that the intervals between sound waves emitted by an object moving towards the observer are compressed, raising the pitch of the sound; likewise, the intervals between sound waves reaching an observer from a receding source are lengthened, and the pitch is lowered. It is reported that this idea was tested by placing trumpeters on an open railway car drawn by a locomotive; musicians with perfect pitch stood by the track, listened intently as the trumpeters passed them, and confirmed Doppler's analysis.

Doppler predicted a similar effect for light waves. For light, an increase in wavelength corresponds to a shift towards the red end of the spectrum; therefore, the spectral lines of an object receding from the observer should shift towards the red. Slipher chose the Doppler effect to interpret his observations of galaxies: he saw the red shift and concluded that the galaxies must be moving away from us. Another step toward the belief that the Universe is expanding came in 1917, when Einstein published his theory of relativity.
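The train-whistle example can be made quantitative with the standard classical Doppler formula for sound (the formula itself is not in the text; the speed, note, and train velocity below are illustrative):

```python
V_SOUND = 343.0  # speed of sound in air, m/s (about 20 degrees C)

def heard_frequency(f_source, v_source):
    """Frequency heard by a stationary listener.
    v_source > 0: source approaching; v_source < 0: source receding."""
    return f_source * V_SOUND / (V_SOUND - v_source)

f = 440.0  # Hz, the note the trumpeters play
v = 20.0   # m/s, the locomotive's speed (about 72 km/h)

print(round(heard_frequency(f, v), 1))   # raised pitch while approaching (~467 Hz)
print(round(heard_frequency(f, -v), 1))  # lowered pitch while receding (~416 Hz)
```

The compressed wave intervals in front of the source and the stretched ones behind it are exactly the numerator and denominator of this ratio, which is why the pitch drops abruptly as the train passes.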
Before Einstein, scientists had always assumed that space extends infinitely in all directions and that its geometry is Euclidean and three-dimensional. Einstein suggested that space could have a different geometry: a curved, closed four-dimensional space-time. According to Einstein's theory, there are many forms that space can take. One of them is a closed space without boundaries, similar to the surface of a sphere; another is a negatively curved space extending infinitely in all directions. Einstein himself thought that the Universe was static, and he adapted his equations to accommodate this. But almost at the same time the Dutch astronomer Willem de Sitter found a solution of Einstein's equations that predicted a rapid expansion of the Universe, in which the geometry of space must change over time.

De Sitter's work aroused interest among astronomers around the world, among them Edwin Hubble. He had been present at the American Astronomical Society conference in 1914 when Slipher reported his seminal findings on the motion of galaxies. In 1928, at the famous Mount Wilson Observatory, Hubble set to work on combining de Sitter's theory of an expanding universe with Slipher's observations of receding galaxies. Hubble reasoned roughly as follows. In an expanding universe, galaxies should be expected to move away from each other, with more distant galaxies receding faster. This would mean that from any point, including the Earth, an observer should see all other galaxies moving away from him, and that, on average, the more distant galaxies should be moving faster. If this were true and actually observed, there would be a proportional relationship between the distance to a galaxy and the degree of redshift in its spectrum. He observed that the spectra of most galaxies are indeed redshifted, and that galaxies at greater distances from us have greater redshifts.
Hubble did not know how far away any given galaxy was, so he proposed the following approach: "We can begin to estimate the distances to the nearest stars using various methods. Then, step by step, we can construct a 'cosmic distance ladder' that will give us an estimate of the distances to some galaxies. If we can estimate the intrinsic brightness of galaxies, then we can find the ratio of the distance to an unknown galaxy to the distance to a known one by measuring the apparent brightness of the galaxy. This dependence obeys the inverse-square law. We will not go into the details of the complex procedure used to justify the distance ladder here. Let us only note that this procedure involves many theoretical interpretations, in which there are many questionable points, and which have been subject to revision, often in unexpected places. This will emerge as the story progresses."

Hubble, using his method of approximating distances, established the proportional relationship, now known as Hubble's law, between the magnitude of the redshift and the distance to a galaxy. He believed he had clearly shown that the most distant galaxies have the greatest redshifts and therefore recede from us the fastest, and he accepted this as sufficient evidence that the Universe is expanding. Over time this idea became so firmly established that astronomers began to apply it in reverse: if distance is proportional to redshift, then the distance to a galaxy can simply be calculated from its measured redshift. But, as we noted, Hubble distances are not obtained by direct measurement of the distances to galaxies. On the contrary, they are derived indirectly, from measurements of the apparent brightness of galaxies.
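The brightness step in Hubble's quoted reasoning rests on the inverse-square law: apparent brightness falls off as 1/d², so for two objects of equal intrinsic luminosity the distance ratio is the square root of the inverse brightness ratio. A sketch (the function name is ours):

```python
import math

# Apparent brightness b = L / (4*pi*d^2); for equal luminosity L,
# b_near / b_far = (d_far / d_near)^2, hence d_far / d_near = sqrt(b_near / b_far).

def distance_ratio(b_near, b_far):
    """How many times farther the fainter object is, assuming equal luminosity."""
    return math.sqrt(b_near / b_far)

# A galaxy that appears 100 times fainter than an identical one is 10 times farther:
print(distance_ratio(100.0, 1.0))  # 10.0
```

The "questionable points" Hubble mentions live in the equal-luminosity assumption: if the fainter galaxy is intrinsically dimmer rather than farther, the inferred distance is wrong.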
Thus, the expanding-universe model has two potential flaws. First, the brightness of celestial objects may depend on factors other than distance, so distances calculated from the apparent brightness of galaxies may be invalid. Second, it is possible that the redshift is not related to speed at all. Indeed, a number of astronomers have argued that some redshifts are not caused by the Doppler effect, so the question of the correctness of the expanding-Universe concept remains open.

One astronomer who has questioned the interpretation of all redshifts as Doppler shifts is Halton Arp. At Palomar he observed many examples of discordant redshifts that did not obey Hubble's law. Analyzing them, he suggested that redshifts in general could be produced by mechanisms other than the Doppler effect. This raises the question of why scientists interpret redshifts solely as a Doppler effect. It may be true that the Doppler effect causes the red shift, but how can we know this for sure? One of the main reasons for the conclusion is that, according to modern physics, the only mechanism other than the Doppler effect that can produce a redshift is a powerful gravitational field: if light climbs out of a gravitational field, it loses some of its energy and is redshifted. However, astronomers do not find this explanation acceptable for stars and galaxies, because the gravitational field would have to be incredibly strong to cause the observed redshifts.

Arp reported finding high-redshift objects in close proximity to low-redshift objects. According to the standard theory of an expanding universe, an object with a low redshift should be relatively close to us, and an object with a high redshift should be farther away; thus two objects that are physically close to each other should have approximately the same redshifts.
However, Arp gives the following example. The spiral galaxy NGC 7603 is connected to a neighboring galaxy by a luminous bridge, and yet the neighbor has a redshift 8,000 kilometers per second greater than that of the spiral galaxy. Judging by the difference in their redshifts, the galaxies should be far apart: the neighboring galaxy should be 478 million light years more distant. This is already strange, since the two galaxies are close enough for physical contact. For comparison, our Galaxy is only about 2 million light years from its nearest large neighbor, the Andromeda Galaxy.

There are, of course, proponents of the standard view who strongly disagree with Arp's interpretation. They believe the objects are actually far from each other and their proximity is only apparent: the luminous bridge exists, but the more distant galaxy merely happens to lie behind the bridge along our line of sight. Arp, however, notes a significant superficiality in his opponents' reasoning: "The galaxy they show is in any case unusual. The luminous bridge is simply one of its normal spiral arms." In Arp's example, by contrast, the bridge is an unusual structure, not the norm for such galaxies, and the probability that two galaxies of these types would fall into such a configuration is much smaller than the probability of a Milky Way star lining up with an ordinary galaxy.

Arp found many other examples contradicting the traditional understanding of redshift. Here is one of the most controversial discoveries. The quasar Markarian 205, near the spiral galaxy NGC 4319, is visually connected to the galaxy by a luminous bridge. The galaxy has a redshift of 1,800 kilometers per second, corresponding to a distance of about 107 million light years. The quasar has a redshift of 21,000 kilometers per second, which would put it at 1.24 billion light years.
But Arp maintained that these objects are genuinely connected, which shows that the standard redshift interpretation is wrong in this case. (One may note, incidentally, that astronomers express redshift in kilometers per second; this itself shows their commitment to the idea that redshift is explained by the Doppler effect.) Critics said that in their photographs of NGC 4319 they could not find the connecting bridge shown in Arp's image; others reported that the bridge was a "spurious photographic effect". But later Jack M. Sulentic of the University of Alabama carried out extensive photometric studies of the two objects and concluded that the connecting bridge is real.

Another example of a discordant redshift noticed by Arp is a highly unusual chain of galaxies called Vorontsov-Velyaminov 172, after its Russian discoverer. In this chain, the smaller, more compact member has a redshift twice that of the others. Besides pairs of galaxies with discordant redshifts, Arp noticed something even stranger: it appears that quasars and galaxies can eject other quasars and galaxies. Here is one example. The exploding galaxy NGC 520 has a low apparent redshift. Four faint quasars lie along a straight line running southeast of the galaxy. Arp showed that these faint quasars are the only ones in this region. Could it be a mere coincidence that they line up almost exactly with the galaxy? Arp argued that the chance of this is extremely small and suggested that the quasars were ejected from the exploding galaxy. Interestingly, the quasars have redshifts much greater than that of the galaxy that appears to be their parent, so according to the standard redshift theory they should be much farther away than the galaxy. Arp interprets this and other similar examples as suggesting that newly ejected quasars are born with high redshifts, which then gradually decrease over time.
Some scientists question whether it is realistic for a galaxy to eject other massive objects such as galaxies or quasars. In response, Arp points to a striking photograph of the giant galaxy M87 spewing out a jet of matter. When we look at the elliptical galaxies in the region around M87 (itself an elliptical galaxy), we see that they all lie along the direction of the ejected jet. Some astronomers suggest, as does Arp, that these galaxies were ejected from M87.

How can one galaxy emit another? If a galaxy is an "island universe" consisting of a vast aggregate of stars and gas, how can it emit another galaxy, which is the same kind of aggregate of stars and gas? Radio astronomy may provide a clue. Radio astronomers have found that vast regions of radio emission appear to be ejected from galaxies; these emission regions exist in pairs on either side of some galaxies. To explain this, astronomers postulate giant rotating black holes at the centers of galaxies, which devour nearby stars and spit out matter in both directions along the axis of rotation. If Arp's analysis is correct, however, then not only regions of emission composed of thin gas but also the material of a galaxy, or precursors of galaxies, may be ejected.

Returning to the redshifts of such ejected galaxies and quasars, Arp found that the ejected objects have much higher redshifts than their parent, although they are in close proximity to it. Arp can explain this only by supposing that their redshifts are not caused by the Doppler effect; what astronomers measure is then not the speed at which an object is receding. More likely, the redshift is related to the actual physical state of the object. However, the known laws of physics do not answer the question of what kind of state this could be: a galaxy is still thought to consist of individual stars plus clouds of gas and dust.
What properties could such a system have that would produce a redshift not caused by the Doppler effect or by gravity? This cannot be explained in terms of the known physical laws. Of course, Arp's findings are highly controversial, and many astronomers doubt that such connections between galaxies and quasars can really be real. But this is one line of evidence suggesting that the standard interpretation of galaxy redshifts may yet change.

Conclusion

The Big Bang hypothesis still remains an unproven assumption (or, simply put, a fairy tale), and the idea of a stationary Universe needs further research. What theory will emerge next, time will tell. The Universe is not as empty as it seems. Processes of transformation and transfer of energy go on in it (including by those same neutrinos, invisible carriers of energy), and physics has to understand, describe and explain all this, not invent plausible-sounding mathematical fairy tales. At present physics cannot say unambiguously what the real age of the Universe is, or whether it can be measured at all. But it is quite clear that 13.7 billion years ago the Universe already existed: there were galaxies with stars in it, the stars had planets, some of the planets had life, some had intelligent life, and thinking beings then, too, wondered what the real age of it all was.

Hypothesis of a multi-leaf model of the Universe

Preface by the site author: We offer readers of the "Knowledge is Power" site fragments from chapter 29 of Andrei Dmitrievich Sakharov's book "Memoirs". Academician Sakharov describes the work in cosmology that he carried out after he became actively engaged in human rights activity, in particular during his exile in Gorky. This material is of undoubted interest for the topic "The Universe" discussed in this chapter of our site. We will become acquainted with the hypothesis of a multi-leaf model of the Universe and other problems of cosmology and physics. ...And, of course, we will remember our recent tragic past.

Academician Andrei Dmitrievich SAKHAROV (1921-1989).

In Moscow in the 70s and in Gorky, I continued my attempts to study physics and cosmology. During these years I was unable to put forward significantly new ideas, and I continued to develop those directions that were already presented in my works of the 60s (and described in the first part of this book). This is probably the lot of most scientists when they reach a certain age limit for them. However, I do not lose hope that perhaps something else will “shine” for me. At the same time, I must say that simply observing the scientific process, in which you yourself do not take part, but know what is what, brings deep inner joy. In this sense, I am “not greedy.”

I completed a paper in 1974, and published it in 1975, in which I developed the idea of a zero Lagrangian of the gravitational field, as well as the calculation methods I had used in previous works. It turned out that I had arrived at a method proposed many years earlier by Vladimir Aleksandrovich Fock, and later by Julian Schwinger. However, my derivation, the very path of the construction, and the methods were completely different. Unfortunately, I could not send my work to Fock: he died just then.

I subsequently discovered some errors in my article. It left unclarified the question of whether “induced gravity” (the modern term used instead of the term “zero Lagrangian”) gives the correct sign of the gravitational constant in any of the options that I considered.<...>

Three works, one published before my exile and two after it, are devoted to cosmological problems. In the first paper I discuss the mechanisms of baryon asymmetry. Of some interest, perhaps, are the general considerations about the kinetics of the reactions leading to the baryon asymmetry of the Universe. However, in that work I reason specifically within the framework of my old assumption that there exists a "combined" conservation law (the sum of the numbers of quarks and leptons is conserved). I have already described in the first part of my memoirs how I came to this idea and why I now consider it wrong. Overall, this part of the work seems to me unsuccessful. I much prefer the part of the work where I write about the multi-leaf model of the Universe. This is the assumption that the cosmological expansion of the Universe is replaced by contraction, then by a new expansion, in such a way that the cycles of contraction and expansion repeat an infinite number of times. Such cosmological models have long attracted attention; different authors have called them "pulsating" or "oscillating" models of the Universe. I prefer the term "multi-leaf model". It seems more expressive, better matching the emotional and philosophical sense of the grandiose picture of the repeated cycles of existence.

As long as the conservation of baryon charge was assumed, however, the multi-leaf model ran into an insurmountable difficulty following from one of the fundamental laws of nature, the second law of thermodynamics.

A digression. In thermodynamics a certain characteristic of the state of bodies is introduced, called entropy. My father once recalled an old popular science book called "The Queen of the World and Her Shadow". (Unfortunately, I have forgotten the author's name.) The queen, of course, is energy, and the shadow is entropy. Unlike energy, for which there is a conservation law, for entropy the second law of thermodynamics establishes a law of increase (more precisely, of non-decrease). Processes in which the total entropy of bodies does not change are called (considered) reversible. An example of a reversible process is mechanical motion without friction. Reversible processes are an abstraction, a limiting case of irreversible processes, which are accompanied by an increase in the total entropy of the bodies (through friction, heat transfer, and so on). Mathematically, entropy is defined as a quantity whose increase equals the heat inflow divided by the absolute temperature (it is additionally assumed, or more precisely it follows from general principles, that the entropy at absolute zero temperature and the entropy of vacuum are zero).

Numerical example for clarity. A certain body having a temperature of 200 degrees transfers 400 calories during heat exchange to a second body having a temperature of 100 degrees. The entropy of the first body decreased by 400/200, i.e. by 2 units, and the entropy of the second body increased by 4 units; The total entropy increased by 2 units, in accordance with the requirement of the second law. Note that this result is a consequence of the fact that heat is transferred from a hotter body to a colder one.
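The arithmetic of this example can be written out directly; each body's entropy change is the heat it receives divided by its absolute temperature (a sketch, with the function name ours):

```python
def entropy_changes(q, t_hot, t_cold):
    """Entropy changes when heat q flows from a body at t_hot to a body at t_cold
    (temperatures on the absolute scale, as the definition of entropy requires)."""
    d_hot = -q / t_hot   # the hot body gives up heat and loses entropy
    d_cold = q / t_cold  # the cold body receives the same heat but gains more entropy
    return d_hot, d_cold, d_hot + d_cold

d_hot, d_cold, total = entropy_changes(400.0, 200.0, 100.0)
print(d_hot, d_cold, total)  # -2.0 4.0 2.0: total entropy rises, per the second law
```

Because the same heat is divided by a smaller temperature on the receiving side, the total is positive whenever heat flows from hot to cold, which is exactly the point of the example.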

An increase in total entropy during nonequilibrium processes ultimately leads to heating of the substance. Let's turn to cosmology, to multi-leaf models. If we assume that the number of baryons is fixed, then the entropy per baryon will increase indefinitely. The substance will heat up indefinitely with each cycle, i.e. conditions in the Universe will not be repeated!

The difficulty is eliminated if we abandon the assumption of conservation of baryon charge and consider, in accordance with my idea of ​​1966 and its subsequent development by many other authors, that the baryon charge arises from "entropy" (i.e. neutral hot matter) in the early stages of cosmological expansion of the Universe. In this case, the number of baryons formed is proportional to the entropy at each expansion-compression cycle, i.e. the conditions for the evolution of matter and the formation of structural forms can be approximately the same in each cycle.

I first coined the term "multi-leaf model" in a 1969 paper. In my recent articles I use the same term in a slightly different sense; I mention this here to avoid misunderstandings.

The first of the last three articles (1979) examined a model in which space is assumed to be flat on average. It is also assumed that Einstein's cosmological constant is not zero and is negative (although very small in absolute value). In this case, as the equations of Einstein's theory of gravity show, cosmological expansion inevitably gives way to compression. Moreover, each cycle completely repeats the previous one in terms of its average characteristics. It is important that the model is spatially flat. Along with flat geometry (Euclidean geometry), the following two works are also devoted to the consideration of Lobachevsky geometry and the geometry of a hypersphere (a three-dimensional analogue of a two-dimensional sphere). In these cases, however, another problem arises. An increase in entropy leads to an increase in the radius of the Universe at the corresponding moments of each cycle. Extrapolating into the past, we find that each given cycle could have been preceded by only a finite number of cycles.

In “standard” (one-sheet) cosmology there is a problem: what was there before the moment of maximum density? In multi-sheet cosmologies (except for the case of a spatially flat model), this problem cannot be avoided - the question is transferred to the moment of the beginning of the expansion of the first cycle. One can take the view that the beginning of the expansion of the first cycle or, in the case of the standard model, the only cycle is the Moment of the Creation of the World, and therefore the question of what happened before that lies beyond the scope of scientific research. However, perhaps, just as - or, in my opinion, more - justified and fruitful is the approach that allows for unlimited scientific research of the material world and space-time. At the same time, apparently, there is no place for the Act of Creation, but the basic religious concept of the divine meaning of Being is not affected by science and lies beyond its boundaries.

I am aware of two alternative hypotheses related to the problem under discussion. One of them, it seems to me, was first expressed by me in 1966 and was subject to a number of clarifications in subsequent works. This is the “turning of the arrow of time” hypothesis. It is closely related to the so-called reversibility problem.

As I already wrote, completely reversible processes do not exist in nature. Friction, heat transfer, the emission of light, chemical reactions, and the processes of life are all characterized by irreversibility, by a striking difference between past and future. If we film some irreversible process and then run the film backwards, we will see on the screen something that cannot happen in reality (for example, a flywheel coasting by inertia speeds up its rotation while its bearings cool down). Quantitatively, irreversibility is expressed in a monotonic increase of entropy. At the same time, the atoms, electrons, atomic nuclei, and so on that make up all bodies move according to the laws of mechanics (quantum mechanics, but that is unimportant here), which are completely reversible in time (in quantum field theory, with simultaneous CP reflection; see the first part). This asymmetry of the two directions of time (the presence of an “arrow of time,” as it is called) despite the symmetry of the equations of motion long ago attracted the attention of the creators of statistical mechanics. Discussion of the question began in the closing decades of the nineteenth century and was at times quite heated. The solution that more or less satisfied everyone was the hypothesis that the asymmetry is due to the initial conditions of motion and position of all atoms and fields “in the infinitely distant past.” These initial conditions must be “random” in some well-defined sense.

As I suggested (in 1966, and more explicitly in 1980), in cosmological theories that have a distinguished point in time, these random initial conditions should be attributed not to the infinitely distant past (t → −∞) but to that distinguished point (t = 0).

Then, automatically, entropy has its minimum value at this point, and as one moves away from it in time, forward or backward, entropy increases. This is what I called “the turning of the arrow of time.” Since, when the arrow of time turns, all processes are reversed, informational processes (and hence the processes of life) included, no paradoxes arise. These ideas about the turning of the arrow of time have, as far as I know, not received recognition in the scientific world. But they seem interesting to me.

The turning of the arrow of time restores, in the cosmological picture of the world, the symmetry of the two directions of time that is inherent in the equations of motion!

In 1966-1967 I assumed that CPT reflection occurs at the point where the arrow of time turns. This assumption was one of the starting points of my work on baryon asymmetry. Here I will present another hypothesis (Kirzhnitz, Linde, Guth, Turner, and others had a hand in it; mine is only the remark that a turning of the arrow of time takes place).

Modern theories assume that vacuum can exist in various states: stable, with an energy density equal to zero with great accuracy; and unstable, having a huge positive energy density (effective cosmological constant). The latter state is sometimes called a "false vacuum".

One of the solutions of the equations of general relativity for such theories is the following. The Universe is closed, i.e., at each moment it is a “hypersphere” of finite volume (a hypersphere is the three-dimensional analogue of the two-dimensional surface of a sphere; a hypersphere can be imagined as “embedded” in four-dimensional Euclidean space, just as a two-dimensional sphere is “embedded” in three-dimensional space). The radius of the hypersphere has a minimum finite value at some moment of time (denote it t = 0) and grows with distance from this point, both forward and backward in time. The entropy of a false vacuum is zero (as for any vacuum in general), and as one moves away from the point t = 0, forward or backward in time, it increases owing to the decay of the false vacuum into the stable state of the true vacuum. Thus, at the point t = 0 the arrow of time turns (but there is no cosmological CPT symmetry, which would require infinite compression at the point of reflection). As in the case of CPT symmetry, all conserved charges are again equal to zero here (for a trivial reason: at t = 0 there is a vacuum state). Therefore, in this case too one must assume that the observed baryon asymmetry arose dynamically, caused by the violation of CP invariance.

An alternative hypothesis about the prehistory of the Universe is that in fact there is not one Universe, and not two (as, in some sense of the word, in the hypothesis of the turning of the arrow of time), but many, radically different from one another and arising from some “primary” space (or from the particles composing it; this may just be another way of saying the same thing). Other Universes and the primary space, if it makes sense to speak of it, may in particular have a different number of “macroscopic” spatial and temporal dimensions (coordinates) than “our” Universe; in our Universe there are three spatial dimensions and one temporal one, but in other Universes everything may be different! I ask you not to pay special attention to the adjective “macroscopic” in quotation marks: it is connected with the “compactification” hypothesis, according to which most dimensions are compactified, i.e., closed in on themselves on a very small scale.


Structure of the “Mega-Universe”

It is assumed that there is no causal connection between the different Universes. This is precisely what justifies interpreting them as separate Universes. I call this grandiose structure the “Mega-Universe.” Several authors have discussed variants of such hypotheses. In particular, the hypothesis of the multiple birth of closed (approximately hyperspherical) Universes is defended in one of his papers by Ya. B. Zeldovich.

The Mega-Universe ideas are extremely interesting; perhaps the truth lies precisely in this direction. For me, however, some of these constructions contain one ambiguity of a somewhat technical nature. It is quite acceptable to assume that conditions in different regions of space are completely different. But the laws of nature must necessarily be the same everywhere and always. Nature cannot be like the Queen in Carroll's Alice in Wonderland, who arbitrarily changed the rules of the game of croquet. Existence is not a game. My doubts concern those hypotheses that allow a rupture in the continuity of space-time. Are such processes admissible? Are they not a violation, at the points of rupture, of the laws of nature themselves, rather than merely of the “conditions of being”? I repeat, I am not sure that these concerns are valid; perhaps, as with the question of the conservation of the number of fermions, I am again starting from too narrow a point of view. Besides, hypotheses in which the birth of Universes occurs without a rupture of continuity are entirely conceivable.

The assumption that there is a spontaneous birth of many, perhaps infinitely many, Universes differing in their parameters, and that the Universe surrounding us is singled out among the many worlds precisely by the conditions for the emergence of life and intelligence, is called the “anthropic principle” (AP). Zeldovich writes that the first consideration of the AP known to him in the context of an expanding Universe belongs to Idlis (1958). In the concept of a multi-leaf Universe the anthropic principle can also play a role, but as a choice among successive cycles or their regions. This possibility is discussed in my paper “Multi-leaf Models of the Universe.” One of the difficulties of multi-leaf models is that the formation of “black holes” and their merging disturb the symmetry at the compression stage so strongly that it is entirely unclear whether the conditions of the next cycle are suitable for the formation of highly organized structures. On the other hand, in sufficiently long cycles the processes of baryon decay and black-hole evaporation take place, smoothing out all density inhomogeneities. I assume that the combined action of these two mechanisms, the formation of black holes and the smoothing of inhomogeneities, leads to a successive alternation of “smoother” and more “disturbed” cycles. Our cycle must have been preceded by a “smooth” cycle during which no black holes formed. For definiteness, one can consider a closed Universe with a “false” vacuum at the point where the arrow of time turns. The cosmological constant in this model can be taken equal to zero; the change from expansion to compression occurs simply through the mutual attraction of ordinary matter. The duration of the cycles grows, owing to the growth of entropy with each cycle, and exceeds any given number (tends to infinity), so that the conditions for proton decay and the evaporation of “black holes” are fulfilled.

Multi-leaf models provide an answer to the so-called large-numbers paradox (another possible explanation is the hypothesis of Guth et al., which involves a long “inflation” stage; see Chapter 18).


A planet on the outskirts of a distant globular star cluster. Artist © Don Dixon

Why is the total number of protons and photons in a Universe of finite volume so enormously large, though finite? And in another form, referring to the “open” version: why is the number of particles so large in that region of Lobachevsky's infinite world whose volume is of the order of A³ (A is the radius of curvature)?

The answer given by the multi-leaf model is very simple. It is assumed that many cycles have passed since t = 0; during each cycle the entropy (i.e., the number of photons) increased, and accordingly an ever larger baryon excess was generated in each cycle. The ratio of the number of baryons to the number of photons in each cycle is constant, since it is determined by the dynamics of the initial stages of the expansion of the Universe in that cycle. The total number of cycles since t = 0 is just such that the observed numbers of photons and baryons are obtained. Since their number grows exponentially, the required number of cycles turns out to be not so very large.
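The arithmetic behind this last claim is easy to sketch. With purely illustrative numbers (both the photon count and the per-cycle growth factor below are assumptions for the sake of the example, not figures from the text), exponential growth reaches an enormous total in a modest number of cycles:

```python
import math

# Illustrative sketch: if the photon number is multiplied by a factor k
# in every cycle, then reaching a total of N photons takes only
# log(N) / log(k) cycles -- modest, even though N itself is huge.

N_photons = 1e88   # rough photon count of the observable Universe (assumed)
k = 10.0           # assumed per-cycle growth factor of entropy/photon number

cycles_needed = math.log(N_photons) / math.log(k)
print(f"cycles needed: {cycles_needed:.0f}")
```

With a tenfold growth per cycle, fewer than a hundred cycles suffice; even a much smaller growth factor changes the answer only by a modest multiplicative amount.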

A by-product of my 1982 work is a formula for the probability of gravitational coalescence of black holes (the estimate in the book by Zeldovich and Novikov was used).

Another intriguing possibility, or rather a dream, is associated with multi-leaf models. Perhaps a highly organized mind, developing over the billions of billions of years of a cycle, finds a way to transmit, in encoded form, some of the most valuable part of the information it possesses to its heirs in subsequent cycles, separated from it in time by the period of the superdense state? An analogy: the transmission by living beings, from generation to generation, of genetic information, “compressed” and encoded in the chromosomes of the nucleus of a fertilized cell. This possibility is, of course, absolutely fantastic, and I did not dare to write about it in scientific articles, but on the pages of this book I have given myself free rein. Yet regardless of this dream, the hypothesis of a multi-leaf model of the Universe seems to me important from a philosophical, worldview standpoint.


Cosmological knowledge is formulated in the form of models of the origin and development of the Universe. This is because in cosmology it is impossible to carry out reproducible experiments and derive laws from them, as is done in the other natural sciences; moreover, each cosmic phenomenon is unique. Cosmology therefore operates with models, which are refined and developed as new knowledge about the surrounding world accumulates.

Classical cosmological model

Advances in cosmology and cosmogony in the 18th-19th centuries. culminated in the creation of a classical polycentric picture of the world, which became the initial stage in the development of scientific cosmology.

This model is quite simple and understandable.

1. The Universe is considered infinite in space and time, in other words, eternal.

2. The basic law governing the movement and development of celestial bodies is the law of universal gravitation.

3. Space is in no way connected with the bodies located in it, playing the passive role of a container for these bodies.

4. Time also does not depend on matter, being the universal duration of all natural phenomena and bodies.

5. If all bodies suddenly disappeared, space and time would remain unchanged. The number of stars, planets and star systems in the Universe is infinitely large. Each celestial body goes through a long life path. The dead, or rather extinguished, stars are being replaced by new, young luminaries.

Although the details of the origin and death of celestial bodies remained unclear, basically this model seemed harmonious and logically consistent. In this form, the classical polycentric model existed in science until the beginning of the 20th century.

However, this model of the universe had several flaws.

The law of universal gravitation explained the centripetal acceleration of the planets, but did not explain where the tendency of the planets, like any material bodies, to move uniformly and rectilinearly came from. To explain inertial motion, it was necessary to assume a divine “first push” that set all material bodies in motion; in addition, God's intervention was invoked to correct the orbits of cosmic bodies.

A further flaw was the appearance, within the framework of the classical model, of the so-called cosmological paradoxes: photometric, gravitational, and thermodynamic. The desire to resolve them also prompted scientists to search for new, consistent models.

Thus, the classical polycentric model of the Universe was only partially scientific in nature; it could not provide a scientific explanation of the origin of the Universe and therefore was replaced by other models.

Relativistic model of the Universe

A new model of the Universe was created in 1917 by A. Einstein. It was based on the relativistic theory of gravity - the general theory of relativity. Einstein abandoned the postulates of absoluteness and infinity of space and time, but retained the principle of stationarity, the immutability of the Universe in time and its finitude in space. The properties of the Universe, according to Einstein, are determined by the distribution of gravitational masses in it. The Universe is limitless, but at the same time closed in space. According to this model, space is homogeneous and isotropic, i.e. has the same properties in all directions, matter is distributed evenly in it, time is infinite, and its flow does not affect the properties of the Universe. Based on his calculations, Einstein concluded that world space is a four-dimensional sphere.

At the same time, one should not picture this model of the Universe as an ordinary sphere. Spherical space is a sphere, but a four-dimensional one that cannot be visualized. By analogy, we can conclude that the volume of such a space is finite, just as the surface of any ball is finite and can be expressed in a finite number of square centimeters; the surface of a four-dimensional sphere is likewise expressed in a finite number of cubic meters. Such a spherical space has no boundaries, and in this sense it is unlimited: flying through it in one direction, we would eventually return to the starting point. A fly crawling over the surface of a ball likewise nowhere finds boundaries or barriers forbidding it to move in any chosen direction. In this sense the surface of any ball is unlimited, though finite; unlimitedness and infinity are different concepts.

So, from Einstein’s calculations it followed that our world is a four-dimensional sphere. The volume of such a Universe can be expressed, although very large, but still by a finite number of cubic meters. In principle, you can fly around the entire closed Universe, moving all the time in one direction. Such an imaginary journey is similar to earthly trips around the world. But the Universe, finite in volume, is at the same time limitless, just as the surface of any sphere has no boundaries. Einstein's Universe contains, although a large, but still finite number of stars and stellar systems, and therefore the photometric and gravitational paradoxes are not applicable to it. At the same time, the specter of heat death looms over Einstein’s Universe. Such a Universe, finite in space, inevitably comes to its end in time. Eternity is not inherent in it.

Thus, despite the novelty and even revolutionary nature of the ideas, Einstein in his cosmological theory was guided by the usual classical ideological attitude of the static nature of the world. He was more attracted to a harmonious and stable world than to a contradictory and unstable world.

Expanding Universe Model

Einstein's model of the Universe became the first cosmological model based on the conclusions of the general theory of relativity. This is due to the fact that it is gravity that determines the interaction of masses over large distances. Therefore, the theoretical core of modern cosmology is the theory of gravity - the general theory of relativity. Einstein assumed in his cosmological model the presence of a certain hypothetical repulsive force, which was supposed to ensure the stationarity and immutability of the Universe. However, the subsequent development of natural science made significant adjustments to this idea.

Five years later, in 1922, the Soviet physicist and mathematician A. Friedman, based on rigorous calculations, showed that Einstein’s Universe cannot be stationary and unchanging. At the same time, Friedman relied on the cosmological principle he formulated, which is based on two assumptions: the isotropy and homogeneity of the Universe. The isotropy of the Universe is understood as the absence of distinguished directions, the sameness of the Universe in all directions. The homogeneity of the Universe is understood as the sameness of all points of the Universe: we can conduct observations at any of them and everywhere we will see an isotropic Universe.

Friedman, based on the cosmological principle, proved that Einstein’s equations have other, non-stationary solutions, according to which the Universe can either expand or contract. At the same time, we were talking about expanding the space itself, i.e. about the increase in all the distances in the world. Friedman's universe resembled an inflating soap bubble, with both its radius and surface area continuously increasing.

Initially, the model of the expanding Universe was hypothetical and had no empirical confirmation. However, in 1929 the American astronomer E. Hubble discovered the “redshift” of spectral lines (a shift of lines toward the red end of the spectrum). This was interpreted as a consequence of the Doppler effect, the change in oscillation frequency or wavelength caused by the motion of the wave source and the observer relative to each other. The redshift was explained as a consequence of galaxies receding from one another at a speed that increases with distance. According to recent measurements, the recession velocity grows by approximately 55 km/s for every million parsecs of distance.
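As a rough numerical illustration of Hubble's law, v = H0 · d (the galaxy distance below is hypothetical, and the 55 km/s/Mpc value is the dated figure quoted in the text; modern measurements give roughly 70 km/s/Mpc):

```python
# Recession velocity from Hubble's law, v = H0 * d, and the
# corresponding approximate redshift z ~ v / c for v << c.

H0 = 55.0        # Hubble constant, km/s per megaparsec (value from the text)
c = 3.0e5        # speed of light, km/s

d_mpc = 100.0    # hypothetical galaxy distance in megaparsecs
v = H0 * d_mpc   # recession velocity, km/s
z = v / c        # approximate redshift

print(f"v = {v:.0f} km/s, z ~ {z:.4f}")
```

A galaxy 100 Mpc away would thus recede at about 5500 km/s, a redshift of roughly two percent.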

As a result of his observations, Hubble substantiated the idea that the Universe is a world of galaxies, that our Galaxy is not the only one in it, that there are many galaxies separated by enormous distances. At the same time, Hubble came to the conclusion that intergalactic distances do not remain constant, but increase. Thus, the concept of an expanding Universe appeared in natural science.

What kind of future awaits our Universe? Friedman proposed three models for the development of the Universe.

In the first model, the Universe expands slowly enough that the gravitational attraction between galaxies gradually slows the expansion and eventually stops it, after which the Universe begins to contract. In this model, space is curved, closing on itself to form a sphere.

In the second model, the Universe expands forever; space is curved like the surface of a saddle and is infinite.

In Friedman's third model, space is flat and also infinite.

Which of these three paths the evolution of the Universe follows depends on the ratio of gravitational energy to the kinetic energy of the expanding matter.

If the kinetic energy of the expansion of matter prevails over the gravitational energy that prevents the expansion, then gravitational forces will not stop the expansion of galaxies, and the expansion of the Universe will be irreversible. This version of the dynamic model of the Universe is called the open Universe.

If gravitational interaction predominates, then the rate of expansion will slow down over time until it stops completely, after which the compression of matter will begin until the Universe returns to its original state of singularity (a point volume with an infinitely high density). This version of the model is called the oscillating, or closed, Universe.

In the limiting case, when the gravitational forces are exactly equal to the energy of the expansion of matter, the expansion will not stop, but its speed will tend to zero over time. Several tens of billions of years after the expansion of the Universe begins, a state will occur that can be called quasi-stationary. Theoretically, a pulsation of the Universe is also possible.
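This boundary between the open and closed regimes is conventionally expressed as a critical density, ρc = 3H0² / (8πG): above it gravity wins and the Universe is closed, below it the expansion is unbounded. A sketch of the calculation, using the dated H0 = 55 km/s/Mpc quoted above:

```python
import math

# Critical density separating an open Universe from a closed one:
# rho_c = 3 * H0^2 / (8 * pi * G).

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.086e22              # metres in one megaparsec
H0 = 55.0 * 1000.0 / MPC_M    # Hubble constant converted to SI units, 1/s

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)   # kg/m^3
m_H = 1.67e-27                              # mass of a hydrogen atom, kg

print(f"critical density ~ {rho_c:.2e} kg/m^3")
print(f"~ {rho_c / m_H:.1f} hydrogen atoms per cubic metre")
```

The result, a few times 10⁻²⁷ kg/m³, corresponds to only a few hydrogen atoms per cubic metre, which shows how tenuous the matter deciding the fate of the Universe is.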

When E. Hubble showed that distant galaxies are receding from us at speeds that grow with distance, the unambiguous conclusion was drawn that our Universe is expanding. But an expanding Universe is a changing Universe, a world with a history, having a beginning and an end. The Hubble constant allows us to estimate the time over which the expansion of the Universe has been going on: it turns out to be no less than 10 billion and no more than 19 billion years. The most probable lifetime of the expanding Universe is considered to be 15 billion years. This is the approximate age of our Universe.
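The upper end of this age bracket can be reproduced from the Hubble constant alone: t ≈ 1/H0 sets the characteristic timescale of the expansion. A sketch, using the text's H0 = 55 km/s/Mpc:

```python
# Hubble time t ~ 1/H0: a rough upper bound on the age of an
# expanding Universe (it ignores the deceleration by gravity).

MPC_KM = 3.086e19        # kilometres in one megaparsec
SEC_PER_YEAR = 3.156e7   # seconds in one year

H0 = 55.0 / MPC_KM       # Hubble constant in 1/s
t_hubble_s = 1.0 / H0
t_hubble_gyr = t_hubble_s / SEC_PER_YEAR / 1e9

print(f"Hubble time ~ {t_hubble_gyr:.1f} billion years")
```

With this value of H0 the Hubble time comes out near 18 billion years, consistent with the 10-19 billion year range quoted above; gravitational deceleration pulls the true age below this bound.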

Scientist's opinion

There are other, even the most exotic, cosmological (theoretical) models based on the general theory of relativity. Here's what Cambridge University mathematics professor John Barrow says about cosmological models:

“The natural task of cosmology is to understand as best as possible the origin, history and structure of our own Universe. At the same time, general relativity, even without borrowing from other branches of physics, makes it possible to calculate an almost unlimited number of very different cosmological models. Of course, their selection is made on the basis of astronomical and astrophysical data, with the help of which it is possible not only to test various models for compliance with reality, but also to decide which of their components can be combined for the most adequate description of our world. This is how the current standard model of the Universe arose. So even for this reason alone, the historical diversity of cosmological models has been very useful.

But it's not only that. Many models were created when astronomers had not yet accumulated the wealth of data they have today. For example, the true degree of isotropy of the Universe was established thanks to space equipment only during the last two decades. It is clear that in the past space modelers had many fewer empirical constraints. In addition, it is possible that even models that are exotic by today’s standards will be useful in the future for describing those parts of the Universe that are not yet accessible to observation. And finally, the invention of cosmological models may simply stimulate the desire to find unknown solutions to the general relativity equations, and this is also a powerful incentive. In general, the abundance of such models is understandable and justified.

The recent union of cosmology and particle physics is justified in the same way. Its representatives consider the earliest stage of the life of the Universe as a natural laboratory, ideally suited for studying the basic symmetries of our world, which determine the laws of fundamental interactions. This union has already laid the foundation for a whole fan of fundamentally new and very deep cosmological models. There is no doubt that in the future it will bring no less fruitful results.”

In the beginning, the Universe was an expanding clump of emptiness. Its collapse led to the Big Bang, in the fire-breathing plasma of which the first chemical elements were forged. Then gravity compressed the cooling gas clouds for millions of years. And then the first stars lit up, illuminating a grandiose Universe with trillions of pale galaxies... This picture of the world, supported by the greatest astronomical discoveries of the 20th century, stands on a solid theoretical foundation. But there are specialists who don’t like it. They persistently look for weak points in it, hoping that a different cosmology will replace the current one.

In the early 1920s, the St. Petersburg scientist Alexander Friedman, assuming for simplicity that matter uniformly fills all of space, found a solution of the equations of the general theory of relativity (GTR) that describes a non-stationary, expanding Universe. Even Einstein did not take this discovery seriously, believing that the Universe must be eternal and unchanging; to describe such a Universe he had introduced a special “anti-gravity” lambda term into the GTR equations. Friedman soon died of typhoid fever, and his solution was forgotten: Edwin Hubble, for example, who worked on the world's largest 100-inch telescope at Mount Wilson Observatory, had heard nothing of these ideas.

By 1929, Hubble had measured the distances to several dozen galaxies and, comparing them with previously obtained spectra, unexpectedly discovered that the farther away a galaxy is, the more its spectral lines are shifted to the red. The easiest way to explain the redshift was the Doppler effect, but then it turned out that all galaxies are rapidly receding from us. This was so strange that the astronomer Fritz Zwicky put forward the very bold hypothesis of “tired light,” according to which it is not the galaxies that recede from us, but the light quanta that, on their long journey, experience a kind of resistance to their motion, gradually lose energy, and redden. Then, of course, the idea of expanding space was remembered, and it turned out that the no less strange new observations fit well into this strange, forgotten theory. Friedman's model also benefited from the fact that the origin of the redshift in it looks very similar to the ordinary Doppler effect; even today not all astronomers appreciate that the “scattering” of galaxies through space is not at all the same as the expansion of space itself with the galaxies “frozen” into it.

The “tired light” hypothesis quietly left the scene by the end of the 1930s, when physicists noted that a photon loses energy only by interacting with other particles, in which case the direction of its motion necessarily changes at least slightly. The images of distant galaxies in the “tired light” model should therefore blur, as if in a fog, yet they are seen quite clearly. As a result, the Friedmann model of the Universe, once an alternative to the generally accepted ideas, gained universal recognition. (However, until the end of his life in 1953, Hubble himself allowed that the expansion of space might be only an apparent effect.)

Twice alternative standard

But if the Universe is expanding, it means it was denser before. Mentally running its evolution backwards, Friedman's student, the nuclear physicist Georgi Gamow, concluded that the early Universe was so hot that thermonuclear fusion reactions took place in it. Gamow tried to explain with them the observed abundances of the chemical elements, but he managed to “cook” only a few kinds of light nuclei in the primordial cauldron: it turned out that, in addition to hydrogen, the world should contain 23-25% helium, a hundredth of a percent of deuterium, and a billionth part of lithium. The theory of the synthesis of the heavier elements in stars was developed later by Gamow's rival, the astrophysicist Fred Hoyle, together with his colleagues.

In 1948, Gamow also predicted that an observable trace should remain of the hot Universe: cooled microwave radiation with a temperature of a few kelvins, arriving from all directions in the sky. Alas, Gamow's prediction repeated the fate of Friedman's model: no one hurried to look for this radiation. The theory of a hot Universe seemed too extravagant to justify expensive experiments to test it. In addition, parallels were seen in it with divine creation, from which many scientists distanced themselves. It ended with Gamow abandoning cosmology and switching to the then-nascent field of genetics.

In the 1950s, a new version of the theory of a stationary Universe gained popularity, developed by the same Fred Hoyle together with the astrophysicist Thomas Gold and the mathematician Hermann Bondi. Under the pressure of Hubble's discovery they accepted the expansion of the Universe, but not its evolution. According to their theory, the expansion of space is accompanied by the spontaneous creation of hydrogen atoms, so that the average density of the Universe remains unchanged. This is, of course, a violation of the law of conservation of energy, but an utterly negligible one: no more than one hydrogen atom per cubic meter of space per billion years. Hoyle called his model “the theory of continuous creation” and introduced a special C-field (C for “creation”) with negative pressure, which forced the Universe to inflate while maintaining a constant density of matter. In defiance of Gamow, Hoyle explained the formation of all the elements, including the light ones, by thermonuclear processes in stars.

The cosmic microwave background predicted by Gamow was noticed, by accident, almost 20 years later. Its discoverers received the Nobel Prize, and the hot Friedmann-Gamow Universe quickly supplanted the competing hypotheses. Hoyle, however, did not give up and, defending his theory, argued that the microwave background is generated by distant stars whose light is scattered and re-emitted by cosmic dust. In that case, though, the glow of the sky should be patchy, whereas it is almost perfectly uniform. Gradually, data accumulated on the chemical composition of stars and cosmic clouds, and they too agreed with Gamow's model of primordial nucleosynthesis.

Thus the twice-alternative Big Bang theory became generally accepted or, as it is fashionable to say today, turned into the scientific mainstream. Schoolchildren are now taught that Hubble discovered the explosion of the Universe (rather than the dependence of redshift on distance), and the cosmic microwave radiation, with the light hand of the Soviet astrophysicist Joseph Samuilovich Shklovsky, came to be called “relict radiation.” The model of the hot Universe is “stitched” into people's minds literally at the level of language.

Four Causes of Redshift

Which one should we choose to explain Hubble's law, the dependence of redshift on distance? Two of the four mechanisms change the frequency of the light, the other two its energy; two have been tested in the laboratory, two have not.

1. Doppler effect (frequency change; laboratory tested). Occurs when the radiation source is receding: its light waves arrive at our receiver slightly less often than they are emitted. The effect is widely used in astronomy to measure the velocities of objects along the line of sight.

2. Gravitational redshift (energy change; laboratory tested). When a quantum of light climbs out of a gravitational well, it spends energy overcoming the pull of gravity. The decrease in energy corresponds to a decrease in the frequency of the radiation and a shift toward the red end of the spectrum.

3. Expansion of space (frequency change; not laboratory tested). According to the general theory of relativity, the properties of space itself can change with time. If this increases the distance between source and receiver, the light waves are stretched just as in the Doppler effect.

4. Light fatigue (energy change; not laboratory tested). Perhaps the motion of a light quantum through space is accompanied by a kind of "friction," that is, a loss of energy proportional to the path traveled. This was one of the first hypotheses put forward to explain the cosmological redshift.
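Purely as an illustration, the four mechanisms can be sketched as back-of-the-envelope formulas. The relativistic Doppler formula and the weak-field gravitational shift are standard physics; the exponential "tired light" law and its length scale L are assumptions of that hypothetical model, not established results:

```python
import math

C = 299_792.458  # speed of light, km/s
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2

def z_doppler(v_kms):
    """Relativistic Doppler redshift for a source receding at v (km/s)."""
    beta = v_kms / C
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def z_gravitational(mass_kg, radius_m):
    """Weak-field gravitational redshift z ~ GM/(r c^2) for light
    escaping from radius r of a body of mass M."""
    c_ms = C * 1000.0
    return G * mass_kg / (radius_m * c_ms ** 2)

def z_expansion(scale_then, scale_now):
    """Cosmological redshift: wavelengths stretch with the scale factor,
    1 + z = a_now / a_then."""
    return scale_now / scale_then - 1

def z_tired_light(distance_mpc, length_scale_mpc):
    """Hypothetical 'tired light': exponential energy loss along the path,
    1 + z = exp(d / L), with L a free parameter of the model."""
    return math.exp(distance_mpc / length_scale_mpc) - 1

print(round(z_doppler(3000), 5))            # galaxy receding at 3000 km/s, ~0.01006
print(f"{z_gravitational(2e30, 7e8):.1e}")  # light leaving the Sun, ~2e-6
print(z_expansion(0.5, 1.0))                # space doubled since emission -> 1.0
```

Note that all four mechanisms produce the same observable, a shift of spectral lines, which is exactly why the choice between them is a cosmological rather than a laboratory question.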

Digging under the foundations

But human nature is such that as soon as another seemingly indisputable idea takes hold in society, people immediately appear who want to argue with it. Criticism of standard cosmology can be divided into conceptual criticism, which points out the imperfection of its theoretical foundations, and astronomical criticism, which cites specific facts and observations that it has difficulty explaining.

The main target of conceptual attacks is, of course, the general theory of relativity (GTR). Einstein gave a surprisingly beautiful description of gravity, identifying it with the curvature of space-time. However, general relativity implies the existence of black holes, strange objects at the center of which matter is compressed to a point of infinite density. In physics, the appearance of infinities always indicates the limits of a theory's applicability. At ultra-high densities, general relativity must give way to quantum gravity, but all attempts to introduce the principles of quantum physics into general relativity have failed, which forces physicists to look for alternative theories of gravity. Dozens of them were built in the 20th century. Most did not withstand experimental testing, but a few still hold. Among them, for example, is the field theory of gravity of Academician Logunov, in which there is no curved space and no singularities arise, and hence no black holes and no Big Bang. Wherever the predictions of such alternative theories can be tested experimentally, they agree with those of general relativity; only in extreme cases, at ultra-high densities or at very large cosmological distances, do their conclusions differ. And that means the structure and evolution of the Universe must be different as well.

New cosmography

Once upon a time, Johannes Kepler, trying to theoretically explain the relationships between the radii of planetary orbits, nested the regular polyhedra into one another. The spheres circumscribed about them and inscribed in them seemed to him the most direct path to unraveling the structure of the universe, "The Cosmographic Mystery," as he called his book. Later, based on the observations of Tycho Brahe, he discarded the ancient idea of the celestial perfection of circles and spheres, concluding that the planets move in ellipses.

Many modern astronomers are also skeptical about the speculative constructions of theorists and prefer to draw inspiration by looking at the sky. And there you can see that our Galaxy, the Milky Way, is part of a small cluster called the Local Group of galaxies, which is attracted to the center of a huge cloud of galaxies in the constellation Virgo, known as the Local Supercluster. Back in 1958, astronomer George Abell published a catalog of 2,712 galaxy clusters in the northern sky, which, in turn, are grouped into superclusters.

Admittedly, this does not look like a Universe uniformly filled with matter. But without homogeneity, the Friedmann model cannot yield an expansion regime consistent with Hubble's law; nor can the amazing smoothness of the microwave background be explained. Therefore, in the name of the theory's beauty, the homogeneity of the Universe was elevated to a Cosmological Principle, and observers were expected to confirm it. Of course, at distances small by cosmological standards, up to a hundred times the size of the Milky Way, the attraction between galaxies dominates: they orbit one another, collide and merge. But starting from some distance scale, the Universe simply must become homogeneous.

In the 1970s, observations did not yet allow one to say with certainty whether structures larger than a couple of tens of megaparsecs existed, and the words "large-scale homogeneity of the Universe" sounded like a protective mantra of Friedmann cosmology. But by the beginning of the 1990s the situation had changed dramatically. On the border of the constellations Pisces and Cetus, a complex of superclusters about 50 megaparsecs across was discovered, which includes the Local Supercluster. In the constellation Hydra, the Great Attractor, 60 megaparsecs in size, was found first, and then, behind it, the Shapley Supercluster, three times larger still. And these are not isolated objects: around the same time astronomers described the Great Wall, a complex 150 megaparsecs long, and the list keeps growing.

By the end of the century, the production of 3D maps of the Universe had been put on stream. In a single telescope exposure, the spectra of hundreds of galaxies are obtained: a robotic manipulator places hundreds of optical fibers in the focal plane of a wide-angle Schmidt camera at known coordinates, piping the light of each individual galaxy to the spectrograph. The largest survey to date, SDSS, has already determined the spectra and redshifts of a million galaxies. And the largest known structure in the Universe remains the Sloan Great Wall, discovered in 2003 in SDSS data. Its length is 500 megaparsecs, which is 12% of the distance to the horizon of the Friedmann Universe.

Along with concentrations of matter, many deserted regions of space have also been discovered - voids, where there are no galaxies or even mysterious dark matter. Many of them exceed 100 megaparsecs in size, and in 2007 the American National Radio Astronomy Observatory reported the discovery of a Great Void with a diameter of about 300 megaparsecs.

The very existence of such grandiose structures challenges standard cosmology, in which inhomogeneities develop through the gravitational clumping of matter around tiny density fluctuations left over from the Big Bang. At the observed peculiar velocities of galaxies, they cannot travel more than a dozen or two megaparsecs over the entire lifetime of the Universe. How, then, can concentrations of matter hundreds of megaparsecs across be explained?

Dark Entities

Strictly speaking, the Friedmann model in its pure form does not explain the formation of even small structures, galaxies and clusters, unless one adds to it a special unobservable entity invented in 1933 by Fritz Zwicky. Studying the Coma cluster, he discovered that its galaxies move so fast that they should easily fly apart. Why does the cluster not disintegrate? Zwicky suggested that its mass is much greater than estimated from the luminous sources. Thus hidden mass appeared in astrophysics, today called dark matter. Without it, it is impossible to describe the dynamics of galactic disks and galaxy clusters, the bending of light passing by those clusters, or their very origin. It is estimated that there is five times more dark matter than ordinary luminous matter. It has already been established that it consists neither of dark planetoids, nor black holes, nor any known elementary particles. Dark matter probably consists of some kind of heavy particles that participate only in the weak interaction.
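Zwicky's argument can be sketched with an order-of-magnitude virial estimate, M ~ σ²R/G: a cluster is bound only if its mass matches the observed velocity dispersion. The numbers below are rough Coma-like values for illustration, not Zwicky's actual data:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22    # metres in a megaparsec
M_SUN = 1.989e30  # solar mass, kg

def virial_mass(sigma_kms, radius_mpc):
    """Order-of-magnitude virial mass M ~ sigma^2 * R / G for a bound
    cluster with velocity dispersion sigma and radius R."""
    sigma_ms = sigma_kms * 1000.0
    return sigma_ms ** 2 * radius_mpc * MPC / G

# Illustrative Coma-like numbers: sigma ~ 1000 km/s, R ~ 2 Mpc
m = virial_mass(1000, 2)
print(f"{m / M_SUN:.1e} solar masses")  # a few times 1e14
```

The result is far more mass than the cluster's starlight accounts for, which is the gap that hidden mass was invented to fill.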

Recently, the Italian-Russian satellite experiment PAMELA detected a strange excess of energetic positrons in cosmic rays. Astrophysicists know of no suitable source of such positrons and suggest that they may be the products of some reaction involving dark matter particles. If so, Gamow's theory of primordial nucleosynthesis may be at risk, since it did not assume the presence of a huge number of unknown heavy particles in the early Universe.

The mysterious dark energy had to be urgently added to the standard model of the Universe at the turn of the 20th and 21st centuries. Shortly before, a new method for determining distances to distant galaxies had been tested. The "standard candle" in it was supernovae of a special type, which at the peak of the outburst always have almost the same luminosity; their apparent brightness gives the distance to the galaxy where the cataclysm occurred. Everyone expected the measurements to show a slight slowing of the expansion of the Universe under the self-gravity of its matter. To their great surprise, astronomers discovered that the expansion is, on the contrary, accelerating! Dark energy was invented to provide the universal cosmic repulsion that inflates the Universe. In fact, it is indistinguishable from the lambda term in Einstein's equations and, funnier still, from the C-field of the Bondi-Gold-Hoyle theory of the stationary Universe, once the main competitor of Friedmann-Gamow cosmology. This is how artificial speculative entities migrate between theories, helping them survive under the pressure of new facts.
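The "standard candle" logic boils down to the distance modulus, m − M = 5·log₁₀(d / 10 pc). A minimal sketch, using an illustrative peak absolute magnitude of M ≈ −19.3 for Type Ia supernovae (an assumed round value, not a measurement quoted in the text):

```python
def luminosity_distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5 log10(d / 10 pc)."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6  # parsecs -> megaparsecs

# A Type Ia supernova (assumed peak M ~ -19.3) observed at m = 24:
d = luminosity_distance_mpc(24.0, -19.3)
print(f"{d:.0f} Mpc")  # ~4600 Mpc
```

The acceleration showed up precisely here: supernovae at a given redshift came out systematically fainter, hence farther, than any decelerating Friedmann model allowed.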

If Friedmann's original model had only one parameter determined from observations (the average density of matter in the Universe), then with the advent of the "dark entities" the number of "tuning" parameters grew noticeably. These are not only the proportions of the dark "ingredients," but also their arbitrarily assumed physical properties, such as the ability to participate in various interactions. Does this not resemble Ptolemy's theory? Epicycle after epicycle was added to it, too, to achieve consistency with observations, until it collapsed under the weight of its own overcomplicated design.

DIY Universe

Over the past 100 years, a great many cosmological models have been created. If earlier each of them was perceived as a unique physical hypothesis, the attitude has since become more prosaic. To build a cosmological model, one must settle three things: the theory of gravity, which determines the properties of space; the distribution of matter; and the physical nature of the redshift, from which the distance-redshift relation R(z) is derived. This fixes the cosmography of the model and makes it possible to calculate various effects: how the brightness of a "standard candle," the angular size of a "standard meter," the duration of a "standard second," and the surface brightness of a "reference galaxy" change with distance (or rather, with redshift). All that remains is to look at the sky and see which theory gives the correct predictions.

Imagine that in the evening you are sitting in a skyscraper by the window, looking at the sea of city lights stretching below. There are fewer of them in the distance. Why? Perhaps there are poor outskirts there, or the development has simply ended. Or maybe the light of the lanterns is dimmed by fog or smog. Or the curvature of the Earth's surface comes into play, and the distant lights simply drop below the horizon. For each option, you can calculate the dependence of the number of lights on distance and find a suitable explanation. This is how cosmologists study distant galaxies, trying to choose the best model of the Universe.

For a cosmological test to work, it is important to find "standard" objects and take into account all the interference that distorts their appearance. Observational cosmologists have been struggling with this for eight decades. Take, say, the angular size test. If our space is Euclidean, that is, not curved, the apparent size of galaxies decreases in inverse proportion to the redshift z. In the Friedmann model with curved space, angular sizes decrease more slowly, and we see galaxies slightly enlarged, like fish in an aquarium. There is even a model (Einstein worked with it in the early stages) in which galaxies first shrink with distance and then begin to grow again. The problem, however, is that we see distant galaxies as they were in the past, and their sizes may have changed in the course of evolution. In addition, at great distances dim nebulous patches appear smaller simply because their edges are hard to make out.
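The contrast can be sketched numerically. Assume a static Euclidean model with d = cz/H₀ on one side and, for the Friedmann side, the simplest flat matter-only (Einstein-de Sitter) case, in which angular sizes pass through a minimum near z = 1.25 and grow again beyond it; H₀ = 70 km/s/Mpc is an assumed value:

```python
import math

C = 299_792.458  # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s/Mpc (assumed)

def theta_euclidean(size_mpc, z):
    """Static Euclidean space with redshift proportional to distance:
    d = c z / H0, so angular size (radians) falls as 1/z."""
    return size_mpc * H0 / (C * z)

def theta_eds(size_mpc, z):
    """Flat matter-only (Einstein-de Sitter) Friedmann model:
    comoving distance d_C = (2c/H0)(1 - 1/sqrt(1+z)),
    angular-diameter distance d_A = d_C / (1 + z)."""
    d_c = 2 * C / H0 * (1 - 1 / math.sqrt(1 + z))
    return size_mpc * (1 + z) / d_c

# A 30 kpc galaxy: in the Friedmann model its apparent size stops
# shrinking near z = 1.25 and grows again at higher redshift.
for z in (0.5, 1.25, 3.0):
    print(z, theta_euclidean(0.03, z), theta_eds(0.03, z))
```

In the Euclidean column the angles shrink monotonically; in the Einstein-de Sitter column the z = 3.0 galaxy looks larger than the one at z = 1.25, which is the "fish in an aquarium" effect taken to its extreme.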

It is extremely difficult to take into account the influence of such effects, and therefore the result of a cosmological test often depends on the preferences of a particular researcher. In a huge array of published works, one can find tests that both confirm and refute a variety of cosmological models. And only the professionalism of the scientist determines which of them to believe and which not. Here are just a couple of examples.

In 2006, an international team of three dozen astronomers tested whether distant supernova explosions are stretched in time, as Friedmann's model requires. They found complete agreement with theory: the flashes lengthen by exactly the same factor by which the frequency of their light decreases; time dilation in general relativity affects all processes equally. This result could have been another final nail in the coffin of the stationary Universe theory (the first, 40 years earlier, Stephen Hawking called the cosmic microwave background), but in 2009 the American astrophysicist Eric Lerner published exactly the opposite result, obtained by a different method. He used the surface brightness test for galaxies, invented by Richard Tolman back in 1930 specifically to choose between an expanding and a static Universe. In the Friedmann model, the surface brightness of galaxies falls very quickly with increasing redshift, while in Euclidean space with "tired light" the decline is much slower. At z = 1 (where, according to Friedmann, galaxies are about half the age of those near us), the difference is 8-fold, and at z = 5, close to the limit of the Hubble Space Telescope's capabilities, more than 200-fold. The test showed that the data coincide almost perfectly with the "tired light" model and diverge sharply from Friedmann's.
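The quoted factors follow directly from the Tolman test arithmetic: bolometric surface brightness falls as (1+z)⁴ in an expanding universe, but only as (1+z) in a static Euclidean "tired light" model. A sketch of that textbook scaling (not of Lerner's actual data analysis):

```python
def dimming_friedmann(z):
    """Tolman: in an expanding universe the bolometric surface
    brightness of a galaxy falls as (1 + z)^4."""
    return (1 + z) ** 4

def dimming_tired_light(z):
    """Static Euclidean space with 'tired light': only the photon
    energy loss dims the surface brightness, a factor (1 + z)."""
    return 1 + z

for z in (1, 5):
    ratio = dimming_friedmann(z) / dimming_tired_light(z)
    print(f"z = {z}: expansion predicts {ratio:.0f}x more dimming")
# z = 1: expansion predicts 8x more dimming
# z = 5: expansion predicts 216x more dimming
```

The factor (1+z)³ separating the two models is what makes this test so discriminating at high redshift, and why both camps keep returning to it.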

Ground for doubt

Observational cosmology has accumulated a lot of data that cast doubt on the correctness of the dominant cosmological model, which, after the addition of dark matter and dark energy, came to be called LCDM (Lambda - Cold Dark Matter). A potential problem for LCDM is the rapid growth of record redshifts of detected objects. Masanori Iye of the National Astronomical Observatory of Japan studied how the record redshifts of galaxies, quasars and gamma-ray bursts (the most powerful explosions and the most distant beacons in the observable Universe) have grown. By 2008 all of them had passed the z = 6 threshold, and the record z of gamma-ray bursts grew especially quickly; in 2009 they set another record, z = 8.2. In Friedmann's model this corresponds to an age of about 600 million years after the Big Bang and barely fits existing theories of galaxy formation: push it any further, and the galaxies simply would not have had time to form. Meanwhile, the progress in record z shows no sign of stopping: everyone is awaiting data from the new Herschel and Planck space telescopes, launched in the spring of 2009. If objects with z = 15 or 20 appear, it will become a full-blown crisis for LCDM.

Another problem was noticed back in 1972 by Alan Sandage, one of the most respected observational cosmologists. It turns out that Hubble's law holds all too well in the immediate vicinity of the Milky Way. Within a few megaparsecs of us, matter is distributed extremely inhomogeneously, yet the galaxies seem not to notice: their redshifts are exactly proportional to their distances, except for those very close to the centers of large clusters. The chaotic velocities of galaxies seem to be damped by something. By analogy with the thermal motion of molecules, this paradox is sometimes called the anomalous coldness of the Hubble flow. LCDM has no comprehensive explanation for it, but it arises naturally in the "tired light" model. Alexander Raikov of the Pulkovo Observatory hypothesized that the redshift of photons and the damping of the chaotic velocities of galaxies may be manifestations of one and the same cosmological factor. The same cause might also explain the anomaly in the motion of the American interplanetary probes Pioneer 10 and Pioneer 11: as they left the Solar System, they experienced a small, unexplained deceleration whose magnitude is numerically just right to explain the coldness of the Hubble flow.

A number of cosmologists are trying to prove that matter in the Universe is distributed not uniformly but fractally: whatever the scale on which we examine the Universe, it will always reveal an alternation of clusters and voids of the corresponding level. The first to raise this topic was the Italian physicist Luciano Pietronero in 1987, and a few years ago the St. Petersburg cosmologist Yuri Baryshev and Pekka Teerikorpi from Finland published an extensive monograph, "The Fractal Structure of the Universe." A number of scientific articles claim that redshift surveys confidently reveal the fractal character of the galaxy distribution up to scales of 100 megaparsecs, and trace inhomogeneity up to 500 megaparsecs and beyond. Recently, Alexander Raikov, together with Viktor Orlov of St. Petersburg State University, found signs of a fractal distribution in the catalog of gamma-ray bursts on scales up to z = 3 (that is, in the Friedmann model, across most of the visible Universe). If this is confirmed, cosmology is in for a major shake-up. Fractality generalizes the concept of homogeneity, which was taken as the basis of 20th-century cosmology for reasons of mathematical simplicity. Today fractals are actively studied by mathematicians, and new theorems are regularly proven. The fractality of the large-scale structure of the Universe could have very unexpected consequences, and who knows whether radical changes in the picture of the Universe and its development await us ahead?
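The idea behind such claims can be illustrated with a correlation-dimension estimate: count pairs of points closer than r, fit N(<r) ∝ r^D, and a homogeneous distribution gives D ≈ 3 while a fractal one gives D noticeably below 3. A toy sketch on uniform random points (the point count and radii are arbitrary choices; boundary effects bias D slightly low):

```python
import math
import random

random.seed(1)

def correlation_dimension(points, r1, r2):
    """Estimate the correlation dimension D from pair counts:
    N(<r) ~ r^D, so D ~ log(N2/N1) / log(r2/r1)."""
    def pairs_within(r):
        count = 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                if math.dist(points[i], points[j]) < r:
                    count += 1
        return count
    n1, n2 = pairs_within(r1), pairs_within(r2)
    return math.log(n2 / n1) / math.log(r2 / r1)

# Homogeneous random points in a unit cube: D comes out close to 3.
# A fractal (clustered-at-all-scales) set would give a smaller D.
cube = [(random.random(), random.random(), random.random())
        for _ in range(600)]
print(round(correlation_dimension(cube, 0.05, 0.1), 2))
```

Applied to real redshift surveys, the same pair-counting logic is what lets observers claim D < 3 out to a given scale, i.e. fractality rather than homogeneity.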

Cry from the heart

And yet, however much such examples inspire cosmological "dissidents," today there is no coherent and well-developed theory of the structure and evolution of the Universe that differs from the standard LCDM. What is collectively called alternative cosmology consists of a number of claims justly raised against the generally accepted concept, together with a set of promising ideas of varying degrees of elaboration that may prove useful should a strong alternative research program emerge.

Many proponents of alternative views tend to overemphasize individual ideas or counterexamples, hoping that by demonstrating the difficulties of the standard model they can get it abandoned. But, as the philosopher of science Imre Lakatos argued, neither experiment nor paradox can destroy a theory; only a new, better theory can kill a theory. Alternative cosmology has nothing of the kind to offer yet.

But where, the "alternatives" complain, will serious new developments come from if, all over the world, the majority in grant committees, in the editorial offices of scientific journals and on the commissions that allocate telescope observing time are supporters of standard cosmology? They, it is said, simply block the allocation of resources to work outside the cosmological mainstream, considering it a useless waste of funds. Several years ago, tensions rose so high that a group of cosmologists published a very harsh "Open Letter to the Scientific Community" in New Scientist magazine. It announced the founding of the international public organization Alternative Cosmology Group (www.cosmology.info), which has since periodically held its own conferences but has not yet been able to change the situation significantly.

The history of science knows many cases when a powerful new research program was unexpectedly formed around ideas that were considered deeply alternative and of little interest. And, perhaps, the current disparate alternative cosmology carries within itself the germ of a future revolution in the picture of the world.


