
國立臺灣大學理學院物理學研究所 碩士論文
Department of Physics, College of Science
National Taiwan University
Master Thesis

挑戰廣義相對論在極端尺度下的非線性效應
Confronting General Relativistic Nonlinearities in Macroscopic and Microscopic Universes

蔣序文 Chiang Hsu-Wen

Advisor: Pisin Chen, Ph.D. (指導教授:陳丕燊 博士)

June 2016 (中華民國105年6月)


Acknowledgments

First, I would like to thank my advisor Prof. Pisin Chen, who spent countless hours in our weekly progress meetings and guided us through this amazing journey of physics. I would also like to mention that were it not for his wonderful lectures on cosmology, I would have ended up in another field and might never have had the chance to take part in so many projects and meet so many people in the group.

I am deeply grateful to my collaborators Yao-Chieh Hu, Fabien Nugier, and Antonio Enea Romano. Antonio, I appreciate your teaching of MATHEMATICA™ and of the deeper, more intricate parts of general relativity, as well as your patience, since I was still a junior undergraduate when I started my first project. Yao-Chieh, I am glad that you are also graduating this year. Fabien, our weekly discussions have been very fruitful.

I also appreciate useful discussions with Prof. Ron Adler, Prof. Abhay Ashtekar, Prof. Yeong-Chuan Kao and Dr. Peter Scicluna. Without you, some parts of this thesis might never have been finished.

For your inspiring lectures and your willingness to share great knowledge of physics, I thank Prof. Pao-Ti Chang, Prof. Yih-Yuh Chen, Prof. Jiunn-Wei Chen, Prof. Shou-Huang Dai, Prof. Xiao-Gang He, Prof. Pei-Ming Ho, Prof. Keisuke Izumi, Prof. Yeong-Chuan Kao, Prof. Jiwoo Nam, and Prof. Mu Tao Wang.

For your enthusiasm, your camaraderie, and your discussions on physics, which gave me the most incredible study experience of my life, I thank Che-Yu Chen, Chien-Ting Chen, Chun-Yen Chen, Hsiao-Yi Chen, Po-Ta Chen, Yi-Chun Chen, Chia-Wei Hsing, Ke-Chih Lin, Yu-Hsiang Lin, Hua-Ciao Lyu, Yen Chin Ong, and Che-Min Shen.

Finally, I just want to thank two more people. When I was 8 years old I went to the Taipei Astronomical Museum for the first time and became obsessed with black holes, main-sequence stars, and the like. My father and mother then took me to the museum almost weekly, and they have supported me through all these years.

So for Jack and Lisa, thank you.

This thesis is supported by the National Center for Theoretical Sciences (NCTS) of Taiwan, the Ministry of Science and Technology (MOST) of Taiwan, and the Leung Center for Cosmology and Particle Astrophysics (LeCosPA) of National Taiwan University.


中文摘要 (Chinese Abstract)

This master's thesis is divided into two parts, based on the author's previously published papers [2, 9].

First, we argue that, contrary to earlier perturbative results, the nonlinear effects of general relativity allow small-scale inhomogeneities of the universe to alter the supernova luminosity distance-redshift relation, so that the Hubble constant inferred from supernovae differs from the value obtained from the cosmological microwave background. We compute and show that the known 3.4 standard deviation discrepancy can indeed be explained by a void roughly 300 megaparsecs in size, whose size and location are consistent with the void suggested by luminous-matter density observations.

In the second part, building on the spinorial variables that frequently appear in general relativity, we construct a new form of spacetime quantization by converting the worldsheet symmetry of string theory into a spinorial symmetry. Since it starts from features shared by several common theories of gravity, we believe this theory can link popular theories such as loop quantum gravity and superstring theory. We also derive the generalized uncertainty principle and show that the theory is holographic. Because spacetime is quantized, worldlines become fuzzy; we compute the degree of fuzziness and show that it would be extremely hard to measure even on cosmological scales, so this spacetime quantization need not conflict with any cosmological observation.


Abstract

The thesis is divided into two parts, each based on one of the author's previously published papers [2, 9].

First, we suggest that, contrary to the usual perturbative result, the increasingly severe Hubble parameter tension between measurements utilizing low-redshift supernova luminosity distances and those using the cosmological microwave background can be explained away by considering the nonlinear effect of local inhomogeneity. We also compare the density profile from galaxy surveys to the one we obtain under the assumption that the Hubble parameter tension comes solely from local inhomogeneity, and find that the two agree with each other.

Second, we introduce a new type of spacetime quantization based on the spinorial description suggested by loop quantum gravity. Specifically, we build our theory on a string-theory-inspired Spin(3, 1) worldsheet action. Because of its connection with quantum gravity theories, our proposal may in principle link back to string theory, connect to loop quantum gravity, where SU(2) is suggested as the fundamental symmetry, or serve as a Lorentzian spin network. We derive the generalized uncertainty principle and demonstrate the holographic nature of our theory. Due to the quantization of spacetime, geodesics in our theory are fuzzy, but the fuzziness is shown to be far below conceivable astrophysical bounds, which keeps our theory safe from deleterious effects.


Contents

Acknowledgments . . . i
中文摘要 . . . iii
Abstract . . . v
Contents . . . vii
List of Figures . . . ix
List of Tables . . . xi

1 Introduction . . . 1
1.1 Standard Model of Cosmology . . . 2

2 Distance Measurement in GR . . . 5
2.1 Anchors and the Cosmic Ladders . . . 6
2.2 Tension on H0 between Supernovae and CMB Measurements . . . 7

3 Inhomogeneity of the Universe . . . 9

4 Data Analysis . . . 13
4.1 Linear Regression and χ² Analysis . . . 13
4.2 Monte Carlo + Local Optimization . . . 15

5 Easing H0 Tension by Invoking Local Inhomogeneity . . . 19
5.1 Geodesic Equation and the Initial Condition . . . 19
5.2 Mapping DL back to Density Contrast . . . 22
5.3 Result . . . 24

6 Quantization of Spacetime . . . 31
6.1 Dimensional Reduction of Momentum Space . . . 32
6.2 Angular Momentum Space and U(su(2)) Algebra . . . 33
6.3 Adler's Spinorial Spacetime . . . 34

7 Spinorial Spacetime . . . 35
7.1 Reinterpretation, Reformulation and Correction to Adler's Proposal . . . 35
7.2 Obtaining Action Through Fermionization . . . 39
7.3 Composite and Holographic Nature of the Spacetime . . . 42
7.4 Generalized Uncertainty Principle . . . 44
7.5 Smearing Effect . . . 45

8 Conclusion and Future Work . . . 47
8.1 Macroscopic Universe . . . 47
8.2 Microscopic Universe . . . 47

Appendices . . . 49
Bibliography . . . 51


List of Figures

4.1 Work-flow of the fitting procedure. ∆m stands for the difference between the observed magnitude and the value predicted by the FRW model with PLANCK 2016 parameters. Here LR stands for linear regression, NL for nonlinear, param. for parameters, and CL for confidence level. The double arrow consists of n copies of flows, each of which has a different data point deleted. The idea of inversion will be introduced in the next chapter. . . . 17

5.1 Sky map of all SNe and cepheids in our dataset. Three fields are specified in Keenan's work [7] as the three regions with density contrast data. Our targets of interest are field 1 and field 3, which contain enough data points to fit the luminosity distance curve. For the sake of clarity we keep the same colors for field 1 and field 3 as [7] later on. . . . 25

5.2 The 68% confidence band of the field 3 ∆m fit, along with the data points in this region. The deleted data points are in a darker color. The dashed curves are the envelope of the 68% confidence band and the plain curve is the best fit. The fitting model is chosen to be 5 functions of the form Φ(r) = r³ according to the dimensional argument of the polyharmonic spline interpolation method. The gray curve is the result from Riess 2016 [3]. . . . 26

5.3 The 68% confidence band of the field 1 ∆m fit, along with the data points in this region. The deleted data points are in a darker color. The dashed curves are the envelope of the 68% confidence band and the plain curve is the best fit. The fitting model is chosen to be a simple constant shift. The gray curve is the result from Riess 2016 [3]. . . . 27

5.4 The 68% confidence band of the inverted density contrast of field 3, with K0 = −0.1. One can clearly see a ∼68%-significant 10% underdensity around z = 0.02 to 0.08, or 100-400 Mpc. One can directly compare this plot to Keenan's using the conversion d(Mpc) = H0⁻¹z = 4400 z Mpc. One important feature of Keenan's result is the overdense region at around z = 0.1, and such a feature lies within the 68% confidence band of our result. The gray curve is the inverted density contrast of the FRW model with parameters from Riess 2016 [3]. . . . 28

5.5 The 68% confidence band of the inverted density contrast of field 1, with K0 = −0.1. One can clearly see a ∼95%-significant 10% underdensity everywhere. One can directly compare this plot to Keenan's using the conversion d(Mpc) = H0⁻¹z = 4400 z Mpc, and find that the two agree with each other rather well. The gray curve is the inverted density contrast of the FRW model with parameters from Riess 2016 [3]. . . . 29

5.6 The inverted density contrast of the best fit in field 3, under different K0. The blue, green, and red curves correspond to K0 = −0.1, 0, and 0.1 respectively. . . . 30


List of Tables

7.1 A commutativity table showing possible ways of labelling the Hilbert space. For elements $T_{mn}$ inside the table, "O" means the m-th basis commutes with the n-th basis, and "X" means non-commutativity. Here all vectors are along the spatial eigen-direction $n^i = \langle \Delta X^i \rangle$, and $\Delta \vec{X} = n_i \Delta X^i$ is the spatial interval, $\Delta V_3 = \Delta X^1 \Delta X^2 \Delta X^3$ is the time-like 3-volume, $\Delta \vec{V} = -n_i \epsilon^{ijk} \Delta X^0 \Delta X_j \Delta X_k$ is the spatial 3-volume, $\Delta t$ is the time difference, $\Delta s^2$ is the proper distance squared, $\Delta \vec{A} = n_i \epsilon^{ijk} \Delta X_j \Delta X_k$ is the spatial area, $\Delta \vec{A}_t = n_i \Delta X^0 \Delta X^i$ is the time-like area, and $\Delta V_4 = \Delta X^0 \Delta X^1 \Delta X^2 \Delta X^3$ is the 4-volume. Notice that $\vec{V}$ can actually always be described by products of two non-trivial quantum numbers in the system. . . . 37


Chapter 1 Introduction

General relativity (GR) has been hailed as one of the most elegant and successful theories in the history of physics. It has passed numerous experimental tests and laid the foundation of the standard model of cosmology. But however marvellous it may be, there remain pitfalls that puzzle physicists generation after generation. The most bizarre feature of GR is probably its non-linearity. In theories that are linear, e.g. Maxwell theory, doubling the charge density does not result in anything special, at least classically. However, in GR, if one piles too much mass onto the same location, a black hole may be created, and the character of the system changes completely around the event horizon.

On the other hand, in the standard model (SM) of cosmology the universe is considered homogeneous and isotropic in the large-scale limit, and small structures like superclusters or galaxies are just perturbations on top of that, which tweak observations slightly at higher orders. This statement seems to contradict the non-linearity of GR, and a natural question thus arises: are these inhomogeneities really incapable of modifying cosmological observations significantly? We suggest that there is in fact evidence of drastic deviation from the SM of cosmology in local Hubble parameter (H0) measurements that use supernovae (SNe) as standard candles. In this thesis we point out that the tension between H0 measurements based on the cosmological microwave background (CMB) and on SNe can be explained away by positing a void around 300 megaparsecs (Mpc) wide. We develop a technique that converts the luminosity distance (DL) measurement back into a density contrast profile, and show that the latter is indeed, to some degree, similar to what was observed through galaxy surveys [7].

The non-linearity of GR also becomes a stumbling block when one tries to quantize the theory. A naïve perturbation series expansion results in non-renormalizable divergences, simply because every form of energy, including gravitational energy, gravitates. One thus concludes that additional features must be added. For the past fifty years physicists have


introduced various features, including supersymmetry, higher dimensions, strings, and tetrads, in pursuit of a quantum theory of gravitation. Interestingly, both loop quantum gravity (LQG) and superstring theory, the two most popular quantum gravity theories, exhibit a minimal distance, suggesting that spacetime may be quantized. On the other hand, many formalisms, including the Newman-Penrose formalism, the Bondi-Metzner-Sachs symmetry on null infinity, and loop quantum gravity, are based on the spinor description. We therefore construct a new spacetime quantization model based on spinorial variables, and show that it has many interesting properties.

For the sake of readability, indices I, J, K are used for SO(3, 1) tangent-bundle coordinates, μ, ν, λ, ρ for 4-d coordinates, i, j, k for 3-d coordinates, α, β, γ for 2-d coordinates, and (i), (j), (k) for site numbers. Gamma matrices γ are defined according to the type of their indices (Minkowskian if unspecified). We also use natural units $c = \hbar = G = 1$, the shorthands $\dot{a} = \partial a/\partial t$ and $a_{,r} = \partial a/\partial r$, and the signature of g is chosen to be (+, −, −, −).

1.1 Standard Model of Cosmology

As we mentioned in ch. 1, the standard model of cosmology is built on the assumption that the universe is homogeneous and isotropic in the large-scale limit. The metric corresponding to this assumption is the Friedmann–Robertson–Walker (FRW) metric

$$ds^2_{FRW} = dt^2 - a(t)^2 \left[ \frac{dr^2}{1 - kr^2} + r^2 d\Omega^2 \right], \tag{1.1}$$

where a is the scale factor that defines how the universe "expands" or "collapses", in the sense that all matter comoves as the universe expands or collapses, dΩ² is the line element of the unit 2-sphere, and k is the spatial curvature that determines the geometry of the 3-d space (k = 1 for a sphere, k = 0 for flat space, and k = −1 for a hyperboloid). From CMB observations it is known that k ∼ 0, i.e., the universe is almost flat. The spatial curvature is very often confused with the spacetime curvature, but it actually describes only the curvature of space. So even in a flat FRW universe the curvature, i.e. the matter density, does not vanish.

From the Einstein Field Equations (EFE) one immediately obtains the Friedmann equations

$$H^2 = \frac{8\pi}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3}, \tag{1.2}$$

$$\dot{H} = -\frac{4\pi}{3}\left(\rho + 3p\right) + \frac{\Lambda}{3}, \tag{1.3}$$

where $H = \dot{a}/a$ is the Hubble parameter, ρ is the matter density, p is the pressure, and Λ is the cosmological constant.


Clearly, without the cosmological constant, the usual dust-like matter (p = 0) would result in a dynamical universe with a variable scale factor. The discovery of Hubble's law proves that we live in a dynamical universe: the farther away an astronomical object is, the faster it moves away from us. This apparent universal retreat is called "redshift", and is explained not by the Doppler effect but by the expansion of the universe, which stretches the electromagnetic (EM) wave and reddens the light. Hubble's law can be expressed as

$$z \approx H_0 D, \tag{1.4}$$

where H0 is the present-day Hubble parameter, D is the distance, $z = a_0/a - 1$ is the redshift, and a0 is the present-day scale factor.

All this may seem fair given the observational facts, but one immediately realizes that, winding back the clock, the universe actually came from an extremely dense and hot point with a = 0, i.e., the big bang. One of the most successful predictions of cosmology is that there exists residual light from this primordial hot gas, i.e., the cosmological microwave background (CMB). Its observed temperature of 3 K also sets zCMB = 1100.

There is still a loophole in the entire argument: how can one side of the CMB have the same temperature as the other side? To answer this one needs the idea of inflation: a rapid expansion period before the CMB era that allows causal connection between the two ends of the sky. Naïvely the cosmological constant could serve the role, but since such rapid expansion needs to stop, what we need is some kind of dynamical cosmological constant.

The simplest realization is through a scalar field called the inflaton. With a scalar field φ the Friedmann equations become

$$H^2 = \frac{8\pi}{3}\left[\frac{1}{2}\dot{\phi}^2 + V(\phi)\right], \tag{1.5}$$

$$\dot{H} = -4\pi \dot{\phi}^2. \tag{1.6}$$

To make sure the inflaton behaves like a cosmological constant, we introduce the idea of slow roll:

$$H^2 = \frac{1}{3}\left[8\pi V(\phi) - \dot{H}\right] \approx \frac{8\pi}{3} V(\phi), \tag{1.7}$$

which immediately leads to

$$\epsilon = \frac{1}{16\pi}\left(\frac{V'}{V}\right)^2 \ll 1, \tag{1.8}$$

$$\eta = \frac{1}{8\pi}\frac{V''}{V} \ll 1. \tag{1.9}$$


ε and η are the slow-roll parameters that characterize inflation. Given a roughly constant acceleration, there is a horizon at H⁻¹ with a temperature of H/2π, which perturbs the inflaton field. The fluctuations of the inflaton field then seed the inhomogeneity that grows into large-scale structures and CMB anisotropies. In the SM of cosmology, the impact of the inhomogeneity on observations has been studied thoroughly up to first order. However, as we argued at the beginning of this chapter, the non-linearity of GR creeps in as the perturbations grow into filaments and voids. In the following chapters we will explain how such an effect modifies one of the most important measurements, the distance measurement, in an unexpected way.
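The slow-roll conditions are easy to check numerically for a concrete potential. The sketch below (not part of the thesis; the quadratic potential, mass, and field value are illustrative choices) evaluates ε and η of eqs. (1.8)-(1.9) for V = m²φ²/2, for which both parameters reduce to 1/(4πφ²):

```python
import math

# An illustrative check (not from the thesis) of the slow-roll parameters
# (1.8)-(1.9) for a toy quadratic inflaton potential V = m^2 phi^2 / 2,
# for which both parameters reduce to 1/(4 pi phi^2); the mass and field
# value below are illustrative (natural units, c = hbar = G = 1).
def epsilon(V, dV, phi):
    """epsilon = (1/16 pi) (V'/V)^2, eq. (1.8)."""
    return (dV(phi) / V(phi)) ** 2 / (16 * math.pi)

def eta(V, d2V, phi):
    """eta = (1/8 pi) V''/V, eq. (1.9)."""
    return d2V(phi) / V(phi) / (8 * math.pi)

m = 1e-6                              # toy inflaton mass
V = lambda phi: 0.5 * m**2 * phi**2   # quadratic potential
dV = lambda phi: m**2 * phi
d2V = lambda phi: m**2

phi = 15.0  # super-Planckian field value
print(epsilon(V, dV, phi), eta(V, d2V, phi))  # both << 1: slow roll holds
```

For this potential slow roll simply requires a super-Planckian field value, φ ≫ 1; inflation ends as φ rolls down toward φ ∼ 1, where ε grows to order unity.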


Chapter 2

Distance Measurement in GR

In everyday experiments, using a ruler is the easiest way to measure a distance. However, this method is not applicable on astronomical scales. To measure such great distances one must rely on something that can travel through vacuum, e.g. EM waves. Using Earth's orbit as the ruler, parallax can measure the distances to many astronomical objects within our own Milky Way (MW). This is also how one defines the "parsec" (pc): an object with a parallax movement of 1 arcsecond is 1 pc away.

For objects even farther from us, parallax is not so useful, as the intrinsic scale of the method is Earth's orbit. One therefore must rely on non-geometrical methods. The simplest is the standard candle. Given a candle of known luminosity at a certain distance, one easily gets the distance of that candle from

$$D_L = \sqrt{\frac{L_0}{L}}\, D_0, \tag{2.1}$$

where L0 is the known luminosity at distance D0, L is the measured luminosity, and DL is the luminosity distance. There are several types of objects with almost constant energy outputs regardless of where or how old they are, and astronomers use these so-called standard candles of cosmology to determine distances. Cepheids and SNe are probably the two most famous, as both are very bright, have very accurately measured energy outputs, and have well-explained light curves.

The distances obtained in the three different ways mentioned above are exactly the same in Minkowski spacetime, but in curved spacetime the equivalence is not guaranteed. In a flat FRW universe one easily obtains the three distances as follows:

$$D_C(z) = \int_0^z \frac{dz'}{H(z')}, \qquad D_A(z) = \frac{D_C(z)}{1+z}, \qquad D_L(z) = (1+z)\, D_C(z). \tag{2.2}$$

Here the comoving distance DC(z) describes the current distance of an object at redshift z as measured by a ruler, the angular diameter distance DA(z) describes the relation between angular motion and transverse motion, and DL is precisely the luminosity distance measured using a standard candle.
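As a concrete illustration of eq. (2.2), the following sketch (not from the thesis; the values of H0, Ωm, and ΩΛ are illustrative) integrates 1/H(z') numerically for a flat matter-plus-Λ universe and exhibits the FRW relation D_L = (1 + z)² D_A:

```python
# An illustrative sketch of eq. (2.2) (not from the thesis): the three
# distances in a flat FRW universe with matter and a cosmological constant,
# with illustrative parameter values H0 = 70, Omega_m = 0.3, Omega_L = 0.7.
H0 = 70.0          # km s^-1 Mpc^-1
OM, OL = 0.3, 0.7  # matter / Lambda density ratios
C = 299792.458     # speed of light in km/s (restores units in D_C)

def H(z):
    """Hubble parameter H(z) for a flat matter + Lambda universe."""
    return H0 * (OM * (1 + z) ** 3 + OL) ** 0.5

def comoving_distance(z, steps=10000):
    """D_C(z) = c int_0^z dz'/H(z'), by the trapezoid rule, in Mpc."""
    dz = z / steps
    vals = [1.0 / H(i * dz) for i in range(steps + 1)]
    return C * dz * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def angular_diameter_distance(z):
    return comoving_distance(z) / (1 + z)

def luminosity_distance(z):
    return (1 + z) * comoving_distance(z)

# In any FRW universe, D_L = (1 + z)^2 D_A holds identically
z = 0.1
print(comoving_distance(z), angular_diameter_distance(z), luminosity_distance(z))
```

At low redshift all three reduce to D ≈ cz/H0, recovering Hubble's law, eq. (1.4).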


To a certain degree the redshift is the ultimate distance-measuring tool once H0 and the constants in eq. (1.2) are determined. However, as we can see, the relations between the three distances depend on the geometry of the spacetime, i.e., on the matter distribution itself. This opens the possibility that inhomogeneity may alter these relations and thus induce effects on distance measurements, and hence on the measurement of H0.

2.1 Anchors and the Cosmic Ladders

As we mentioned in the last section, to obtain the luminosity distance one needs a standard candle with a fully explained light curve and an accurately measured energy output. But in reality one cannot really measure the total energy output, as these standard candles are still astronomical objects. What one really knows is just their magnitude

$$m = -2.5 \log_{10} L, \tag{2.3}$$

where the luminosity L is in a predefined unit. Therefore some specific objects, with distances measured in other ways, are used to determine the so-called absolute magnitude M (the expected magnitude of the object at 10 pc). These special objects are called anchors.

For example, MW cepheids are the anchors used to determine the absolute magnitude of cepheids, since they are close enough for parallax to determine their distances. Cepheids in turn determine the absolute magnitude of SNe, by comparing SNe to cepheids within their host galaxies. SNe then determine the relation between redshift and distance by fixing H0 and the constants in eq. (1.2). This series of anchorings is called the cosmic distance ladder. Clearly, the accuracy of this long chain of measurements could easily be compromised by unexpected contamination coming from local inhomogeneities. Later on we will analyse one specific part of the ladder: the determination of the SNe absolute magnitude and H0.

Another way to measure H0 is through observation of the CMB. The sound horizon of baryons during the CMB era can be fully determined by cosmological models fitted to CMB data. This sound horizon leaves an imprint on the CMB photons (since it did couple to baryons) and provides us a standard ruler at z = 1100! This ruler can then be used in the same way as parallax, i.e., using the angular size of the sound horizon to determine the angular diameter distance to the CMB. We therefore obtain yet another anchor, and thanks to the extremely precise measurements of the CMB we can even determine H0 using the CMB alone.


2.2 Tension on H0 between Supernovae and CMB Measurements

As mentioned in the last section, in addition to the distance ladder at low redshift, one can also use the CMB to determine the value of H0. However, both values depend on the underlying cosmological model. The SNe measurement requires multiple anchors and thus suffers from possible contamination due to local inhomogeneity. For the CMB measurement, the determination of the sound horizon is even more model dependent. Nevertheless, one can still compare the two values directly under the same model.

For the vanilla FRW model, H0 is determined as

$$H_0^{SN} = 73.24 \pm 1.74 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \tag{2.4}$$

$$H_0^{CMB} = 66.93 \pm 0.62 \ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \tag{2.5}$$

where $H_0^{SN}$ comes from Riess [3] and $H_0^{CMB}$ comes from the PLANCK 2016 data [4]. A 3.4σ difference between the two is 99.9% certain not to be a fluke, and such a strong tension implies that there is indeed contamination, from either the model-choosing process or the systematics of the measurements. Considering that PLANCK, WMAP, and various baryon-acoustic-oscillation-based results all lead to H0 around 66 to 69, while according to [3] all anchors lead to H0 around 72 to 74 (ignoring MW cepheids, as they are heavily contaminated by peculiar motion), it seems highly unlikely that such a systematic error exists. The only possibility remaining is that new physics beyond the traditional FRW model must be considered.

We provide an easy way to ease the tension, by considering the possibility that local inhomogeneities contaminate certain important anchors and ruin the entire cosmic ladder. According to [7], there seems to be a large void nearby, and interestingly the cepheids Riess used to determine the absolute magnitude of SNe are mostly very close to the void, further strengthening the possibility of such contamination. In the next three chapters we will show that the inconsistency between the SNe and CMB measurements can indeed be explained by local inhomogeneities.


Chapter 3

Inhomogeneity of the Universe

Since we are interested in low-redshift SNe and cepheids, the cosmological constant is negligible. We will ignore it from now on, except when constructing the density contrast, where the background is chosen to be the FRW model with PLANCK 2016 parameters [4].

Considering the fact that light is also attracted by the gravitational field, an overdense region attracts more light, thus decreasing the observed luminosity distance. In the linear limit one would think that an underdense region is less attractive than the neighbouring regions and increases the observed luminosity distance. One can draw the same conclusion from the lensing equation

$$\kappa = \frac{3}{2} H_0^2 \Omega_m \int_0^{\chi_S} d\chi\, \frac{(\chi_S - \chi)\,\chi}{\chi_S}\, \delta_C(\chi)\, (1+z), \tag{3.1}$$

where κ = D0/Dmod − 1 is the convergence of the light, Dmod is the altered luminosity distance, Ωm is the matter density ratio, δC is the density contrast, χ is the comoving distance, and χS is the comoving distance to the source. As we can see,

$$\left(\frac{D_{mod}}{D_0}\right)'(z) \propto -\delta_C(z) \tag{3.2}$$

when z is small. Another effect comes from the Doppler shift of the matter outflow due to the inhomogeneity. The authors of [8] have calculated this Doppler effect through the velocity field. However, these perturbative results require δC to be small. When this is not the case we must use the full nonlinear EFE. To simplify the calculation, one usually focuses on certain restricted classes of models.
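To make the sign of the effect concrete, the sketch below (not from the thesis; all parameter values and the top-hat void profile are made up for illustration) evaluates the integral of eq. (3.1) for a toy 10% underdensity and shows that κ comes out negative, i.e., the observed luminosity distance increases:

```python
# An illustrative evaluation of eq. (3.1) (not from the thesis): the
# convergence kappa for a toy top-hat 10% underdensity out to 300 Mpc.
# All parameter values are made up; at low redshift (1 + z) ~ 1 is used.
H0 = 70.0 / 299792.458  # Hubble parameter in Mpc^-1 (c = 1)
OM = 0.3                # matter density ratio

def delta_C(chi):
    """Toy density contrast: -0.1 inside a 300 Mpc void, 0 outside."""
    return -0.1 if chi < 300.0 else 0.0

def kappa(chi_S, steps=10000):
    """kappa = (3/2) H0^2 Om int_0^chi_S dchi (chi_S - chi) chi / chi_S delta_C."""
    dchi = chi_S / steps
    total = 0.0
    for i in range(steps):
        chi = (i + 0.5) * dchi  # midpoint rule
        total += (chi_S - chi) * chi / chi_S * delta_C(chi) * dchi
    return 1.5 * H0**2 * OM * total

print(kappa(600.0))  # negative: light from behind the void is de-magnified
```

Since κ = D0/Dmod − 1, a negative κ means Dmod > D0, consistent with the proportionality in eq. (3.2).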

In the case of a radially inhomogeneous, spherically symmetric, dust-like system with an observer at the center, the solution of the EFE is the Lemaître-Tolman-Bondi (LTB) metric. This is not an open violation of the Copernican principle, since we only consider it as an approximation to the large-scale structures, by assuming that we are at the center of a void and ignoring all other voids and filaments around us.


The Lemaître-Tolman-Bondi solution can be written as

$$ds^2 = dt^2 - \frac{(R_{,r})^2\, dr^2}{1 + 2E} - R^2 d\Omega^2, \tag{3.3}$$

where R = R(t, r) is the angular diameter distance and E = E(r) is an arbitrary function of r. The EFE give

$$\left(\frac{\dot{R}}{R}\right)^2 = \frac{2E(r)}{R^2} + \frac{2M(r)}{R^3}, \tag{3.4}$$

$$\rho(t, r) = \frac{2M_{,r}}{R^2 R_{,r}}, \tag{3.5}$$

with M = M(r) another arbitrary function of r. The solution can be expressed parametrically in terms of a time variable $\eta = \int^t dt'/R(t', r)$ as

$$\tilde{R}(\eta, r) = \frac{M(r)}{-2E(r)}\left[1 - \cos\left(\sqrt{-2E(r)}\,\eta\right)\right], \tag{3.6}$$

$$t(\eta, r) = \frac{M(r)}{-2E(r)}\left[\eta - \frac{1}{\sqrt{-2E(r)}}\sin\left(\sqrt{-2E(r)}\,\eta\right)\right] + t_b(r), \tag{3.7}$$

where R̃ has been introduced to clarify the distinction between the two functions R(t, r) and R̃(η, r), which are trivially related by R(t, r) = R̃(η(t, r), r), and tb(r) is another arbitrary function of r, called the bang function, corresponding to the fact that big bangs/crunches can happen at different times.

We introduce the variables

$$a(t, r) = \frac{R(t, r)}{r}, \qquad k(r) = -\frac{2E(r)}{r^2}, \qquad \rho_0(r) = \frac{6M(r)}{r^3}, \tag{3.8}$$

so that the EFE are written in a form similar to that for the FRW metric:

$$ds^2 = dt^2 - a^2\left[\left(1 + \frac{a_{,r}\, r}{a}\right)^2 \frac{dr^2}{1 - k(r)r^2} + r^2 d\Omega^2\right], \tag{3.9}$$

$$\left(\frac{\dot{a}}{a}\right)^2 = -\frac{k(r)}{a^2} + \frac{\rho_0(r)}{3a^3}, \tag{3.10}$$

$$\rho(t, r) = \frac{(\rho_0 r^3)_{,r}}{3a^2 r^2 (ar)_{,r}}. \tag{3.11}$$

The solution of the equations above can now be written using η as

$$\tilde{a}(\tilde{\eta}, r) = \frac{\rho_0(r)}{6k(r)}\left[1 - \cos\left(\sqrt{k(r)}\,\tilde{\eta}\right)\right], \tag{3.12}$$

$$t(\tilde{\eta}, r) = \frac{\rho_0(r)}{6k(r)}\left[\tilde{\eta} - \frac{1}{\sqrt{k(r)}}\sin\left(\sqrt{k(r)}\,\tilde{\eta}\right)\right] + t_b(r), \tag{3.13}$$

where $\tilde{\eta} \equiv \eta\, r = \int^t dt'/a(t', r)$.

Clearly, the density ρ(t, r) is directly related to the scale factor a, which is in turn related to the spatial curvature k(r). Therefore one can describe the inhomogeneity in terms of the luminosity distance, which is exactly (1 + z)²R. This relation involves the inversion of the radial null geodesic equations and is called the inversion problem, which will be discussed in ch. 5. In the rest of the thesis we will use this last set of equations with simultaneous big bang tb = 0 and drop the tilde to simplify the notation. Furthermore, without loss of generality, we may set the function ρ0(r) to a constant, ρ0(r) = ρ0 = constant.
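The parametric solution is easy to check numerically. The sketch below (not from the thesis; the values of ρ0 and k are illustrative, with r fixed, tb = 0 and k > 0) evaluates eqs. (3.12)-(3.13) and verifies that the Friedmann-like equation (3.10) is satisfied at a sample point of the evolution:

```python
import math

# A numerical sanity check (not from the thesis) of the parametric LTB
# solution (3.12)-(3.13) at fixed r, with simultaneous big bang t_b = 0 and
# k > 0; the values of rho0 and k below are illustrative. It verifies the
# Friedmann-like equation (3.10) at one point of the evolution.
rho0, k = 1.0, 0.25

def a_of_eta(eta):
    """a(eta) = rho0/(6k) [1 - cos(sqrt(k) eta)], eq. (3.12) at fixed r."""
    return rho0 / (6 * k) * (1 - math.cos(math.sqrt(k) * eta))

def t_of_eta(eta):
    """t(eta) = rho0/(6k) [eta - sin(sqrt(k) eta)/sqrt(k)], eq. (3.13)."""
    return rho0 / (6 * k) * (eta - math.sin(math.sqrt(k) * eta) / math.sqrt(k))

eta = 1.3
a = a_of_eta(eta)
# dt = a d(eta), so da/dt = (da/deta) / a; central difference in eta
deta = 1e-6
a_dot = (a_of_eta(eta + deta) - a_of_eta(eta - deta)) / (2 * deta) / a
lhs = (a_dot / a) ** 2
rhs = -k / a**2 + rho0 / (3 * a**3)
print(lhs, rhs)  # the two sides of eq. (3.10) agree
```

For k < 0 the trigonometric functions are replaced by their hyperbolic counterparts, and the k → 0 limit recovers the usual dust FRW evolution a ∝ t^{2/3}.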


Chapter 4

Data Analysis

Our targets of interest are the low-redshift SNe and cepheids that considerably affect the H0 measurement. For this we need a clean dataset. Ours comes from two different groups: for the SNe the data come from the Union2.1 catalogue [5], and for the cepheids from Riess' 2016 paper [3]. We calibrate the old Union2.1 data according to the difference between Riess' new result and the old calibrator the Union2.1 catalogue was using [6], via the formula

$$m - M = 25 - 5\log_{10} H_0 + 5\log_{10}(H_0 d_L) \approx 25 - 5\log_{10} H_0 + 5\log_{10} z. \tag{4.1}$$

To investigate the effect of the inhomogeneity we further include the angular position data from the SIMBAD astronomical database. This complete database thus allows us to construct a fit that takes directional dependence into account. We construct a fitting program that helps us extract a model-independent global fitting formula for the luminosity distance, as discussed in the following sections.
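As a sanity check of eq. (4.1), the sketch below (not the thesis' calibration code; the value of H0 is an illustrative choice) evaluates the low-redshift distance modulus and confirms that it coincides with the textbook definition m − M = 5 log10(dL/10 pc) for dL = z/H0:

```python
import math

# A sketch (not from the thesis) of the low-redshift calibration formula
# (4.1): the distance modulus m - M when H0 d_L ~ z. The value of H0 is
# illustrative, converted to Mpc^-1 so that d_L = z / H0 is in Mpc.
H0 = 70.0 / 299792.458  # Mpc^-1 (c = 1 units)

def distance_modulus(z):
    """m - M = 25 - 5 log10 H0 + 5 log10 z, eq. (4.1), for low z."""
    return 25.0 - 5.0 * math.log10(H0) + 5.0 * math.log10(z)

# consistency with m - M = 5 log10(d_L / 10 pc) for d_L = z / H0 in Mpc
z = 0.01
print(distance_modulus(z), 5.0 * math.log10((z / H0) * 1e5))
```

The agreement is exact, since 25 − 5 log10 H0 + 5 log10 z = 5 log10(10⁵ z/H0), i.e., 5 log10 of dL expressed in units of 10 pc.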

4.1 Linear Regression and χ² Analysis

Given a dataset with N data points, one of the simplest ways to obtain a smooth function is linear regression,

$$f(\vec{X}) = \sum_{m=1}^{n} w_m b_m(\vec{X}), \tag{4.2}$$

where $b_m$ is the m-th basis function and $w_m$ is the associated weight. The total number of basis functions n is restricted to be no greater than the number of data points, so that the system is not under-determined. To obtain the best-fit weights one minimizes the total error of the fit in terms of χ²:

$$\chi^2 = \sum_{i=1}^{N} \left(\frac{y_i - f_i}{e_i}\right)^2, \tag{4.3}$$

where $y_i$ is the i-th data point value, $e_i$ is the associated error, and $f_i$ is the fitted value $f(\vec{X}_i)$ at the i-th data point position $\vec{X}_i$. Assuming a Gaussian error distribution, the likelihood of the fitted values $\{f_i\}$ is $e^{-\chi^2}$, and the minimization is equivalent to maximizing the likelihood of the fit. The best fit can be written as

$$W = \mathsf{H}^{-1} \cdot Y_W, \tag{4.4}$$

where $Y_W \equiv \{y_i/e_i\}$, $W \equiv \{w_m\}$, $\mathsf{H} = \{h_{im}\} = \{e_i^{-1} b_m(\vec{X}_i)\}$, and the inverse here is the Moore-Penrose pseudo-inverse, which minimizes χ².

To visualize this process, imagine that the fitting likelihood distribution is an n-dimensional Gaussian distribution in the $\mathbb{R}^n$ space of $\{(y_i - f_i)/e_i\}$, with the null fit f = 0 at position $Y_W$. A linear regression fit can then be visualized as the point closest to the origin on a hyperplane of dimension n spanned by H passing through $Y_W$. We can therefore separate χ² into an unexplained part (the distance from the origin to that closest point) and an explained part (the distance from the closest point to the null-fit point). We may also calculate the probability distribution of χ² on an n-d plane with residual $\chi^2 = \chi_0^2$ as

$$P(\chi^2) = \pi^{-n/2} \int_0^{\chi - \chi_0} e^{-x^2}\, V(S^{n-1})\, dx^n, \tag{4.5}$$

where $V(S^{n-1})$ is the volume of a standard (n − 1)-dimensional ball. This is the famous cumulative probability of the χ² distribution. The so-called 68% or 95% confidence band is precisely the envelope of all possible fits with cumulative probability less than 68% or 95%. At first sight the computation of the envelope seems an extremely complicated process, but luckily, for linear regression, the boundary of the envelope is the same as a hypersphere on the hyperplane with radius given by the inversion of P(χ²). Such a hypersphere can be written down directly as

$$\chi_P^2 - \chi_0^2 = \operatorname{tr}\left(\mathsf{H} \cdot \delta W\right)^2 = \left(\chi_P^2 - \chi_0^2\right) \operatorname{tr}\left(\mathsf{H} \cdot \mathsf{H}^{-1} \cdot \mathbf{n}\right)^2, \tag{4.6}$$

where δW is the change of the weights on the boundary and $\mathbf{n}$ is an arbitrary unit vector. By the Cauchy inequality, the boundary of the envelope can be described as

$$\pm \sup\{B \cdot \delta W\} \propto \pm \sup_{\mathbf{n}}\{B \cdot \mathsf{H}^{-1} \cdot \mathbf{n}\} = \pm \left|B \cdot \mathsf{H}^{-1}\right|, \tag{4.7}$$

where $B = \{b_m\}$ is the n-dimensional basis vector.
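The weighted fit of eqs. (4.2)-(4.4) is straightforward to sketch in code. The toy below (not the thesis' program; the data and the two-function basis are made up for illustration) solves the n = 2 special case through the normal equations, which reproduce the Moore-Penrose pseudo-inverse solution when H has full column rank:

```python
# A minimal sketch (not the thesis' code) of the weighted linear regression
# of eqs. (4.2)-(4.4): best-fit weights minimizing chi^2 for a toy basis
# b_1(x) = 1, b_2(x) = x, solved through the normal equations (a 2x2
# special case of the Moore-Penrose pseudo-inverse). The data are made up.
data = [(0.0, 1.1, 0.1), (1.0, 2.9, 0.1),
        (2.0, 5.2, 0.2), (3.0, 6.8, 0.2)]  # (x_i, y_i, e_i)
basis = [lambda x: 1.0, lambda x: x]

# H_im = b_m(x_i) / e_i and Y_i = y_i / e_i, as below eq. (4.4)
H = [[b(x) / e for b in basis] for x, _, e in data]
Y = [y / e for _, y, e in data]

# normal equations (H^T H) W = H^T Y, solved by 2x2 Cramer's rule
A = [[sum(h[m] * h[n] for h in H) for n in range(2)] for m in range(2)]
c = [sum(h[m] * yi for h, yi in zip(H, Y)) for m in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [(A[1][1] * c[0] - A[0][1] * c[1]) / det,
     (A[0][0] * c[1] - A[1][0] * c[0]) / det]

chi2 = sum(((y - sum(w * b(x) for w, b in zip(W, basis))) / e) ** 2
           for x, y, e in data)
print(W, chi2)
```

The residual χ² printed at the end is the "unexplained" part in the geometric picture above; points with smaller errors pull the fit harder, as expected from the 1/e_i weighting.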


4.2 Monte Carlo + Local Optimization

A natural extension to the linear regression model would be the inclusion of non-linear parameters

f ( ~X) =

n

X

m=1

wmbm({pam} ; ~X) , (4.8)

where {pam} are some nonlinear parameters for basis bm. Assuming there are k non-linear parameters in total, one can again visualize the fitting process as finding the minimum on a n + k dimensional curved hypersurface. Since it is no longer a linear/flat system, many linear algebra techniques we used in the last section do not apply. The only two methods remaining are Monte Carlo method and local optimization method.

Local optimization method, as its name suggests, is the usual variational method when searching for a local minimum. Since usually the basis is analytic, one can easily compute the gradient of χ2 and use the steepest descend method to approach the local minimum in an iterative way. But just like the search of the ground state in a many-body QM system, most of the time a local minimum is not the global minimum. Therefore we need a complementary method that allows us to probe lots of local minima and then pick the minimum of local minima as the probed global minimum. Randomly generating a list of initial points on the hypersurface, i.e., the Monte Carlo (MC) method, fits our need perfectly. One of the most annoying problem of MC method is that one needs a prior probability distribution of parameters to generate a list of initial points that are

“reasonably good”. Luckily for us the basis we chose has a very special form:
\[
b_m(\vec{X}) = \Phi\!\left(\left|\vec{X} - \vec{X}_m\right|\right), \tag{4.9}
\]

where Φ is a very simple monotonic function (r^{2L} ln r or r^{2L+1} for L ∈ ℕ), and the centers ~X_m are the non-linear parameters. Using monotonic functions as the basis requires them to be placed as close to the data points as possible, so one gets a reasonable prior distribution from the distribution of the data points themselves. To speed up the MC process further, we optimize the distribution by rebuilding it from the parameters returned by the local optimization program. By doing so we speed up the MC process by more than 100 times.
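A minimal sketch of this MC-plus-local-optimization loop (with a hypothetical 1-d dataset and a cubic polyharmonic basis Φ(r) = r³; none of this is the thesis code) could look like:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical residual data standing in for Delta-m measurements.
z = np.sort(rng.uniform(0.01, 0.15, 40))
dm = 0.1 * np.sin(20 * z) + rng.normal(0.0, 0.05, z.size)
n_basis = 3

def chi2_of(centers):
    # Basis b_m(z) = |z - z_m|^3 centered on the non-linear parameters;
    # the linear weights are then fixed exactly by least squares.
    B = np.abs(z[None, :] - np.asarray(centers)[:, None]) ** 3
    w, *_ = np.linalg.lstsq(B.T, dm, rcond=None)
    return np.sum(((dm - B.T @ w) / 0.05) ** 2)

# MC stage: draw trial centers from the data-point distribution itself,
# then refine each trial with a local optimizer and keep the best result.
best = None
for _ in range(100):
    trial = rng.choice(z, size=n_basis, replace=False)
    res = minimize(chi2_of, trial, method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res
```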

Once we get the best fit from MC plus local optimization, we can regard its fitting χ² as the unexplained part of the χ² distribution and find the corresponding χ² for the P-confidence band. Here we make the approximation that the hypersurface is almost flat, since it is computationally prohibitive to construct the hypersurface integral. The error in χ² introduced this way is of the order of ¯R, the average scalar curvature of the hypersurface around the best fit. Usually ¯R is small and the approximation is valid. The envelope of all possible fits with χ² smaller than the value obtained for the P-confidence


band would then be the P-confidence band for the nonlinear fit. For outliers we follow the routine in Riess' 2016 paper [3], i.e., the classic “global fit, but removing the single largest outlier at a time” method. For a complete work-flow chart please see fig. 4.1.
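The outlier-removal routine can be sketched as follows (a schematic stand-in with hypothetical data and a constant-shift model; the 3-σ stopping rule mirrors the description above):

```python
import numpy as np

def remove_outliers(z, dm, sigma, fit, max_iter=10):
    # Iteratively refit and drop the single largest outlier, as long as
    # that outlier deviates from the current fit by more than 3 sigma.
    keep = np.ones(z.size, dtype=bool)
    for _ in range(max_iter):
        model = fit(z[keep], dm[keep])            # refit on surviving points
        resid = np.abs(dm - model(z)) / sigma     # pulls for every point
        worst = int(np.argmax(np.where(keep, resid, -np.inf)))
        if resid[worst] <= 3.0:
            break                                  # largest outlier is within 3 sigma
        keep[worst] = False                        # delete it permanently
    return keep

# Usage with a hypothetical constant-shift model (as used for field 1 below):
rng = np.random.default_rng(2)
z = np.linspace(0.01, 0.1, 30)
dm = np.full(30, -0.1) + rng.normal(0.0, 0.05, 30)
dm[5] = 1.0                                        # inject one gross outlier
constant_fit = lambda zz, dd: (lambda q: np.full_like(q, dd.mean()))
keep = remove_outliers(z, dm, 0.05, constant_fit)  # keep[5] ends up False
```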



Figure 4.1: Work-flow of the fitting procedure. ∆m stands for the difference between the observed magnitude and the value predicted by the FRW model with PLANCK 2016 parameters. Here LR stands for linear regression, NL for nonlinear, param. for parameters, and CL for confidence level. The double arrow consists of n copies of flows, each of which has a different data point deleted. The idea of inversion will be introduced in the next chapter.


Chapter 5

Easing H_0 Tension by Invoking Local Inhomogeneity

In this chapter we introduce the inversion method and compare the density contrast derived from it, using the observed luminosity distance fitted previously, with the observed density contrast from the galaxy survey.

5.1 Geodesic Equation and the Initial Condition

The luminosity distance for an observer at the center of a LTB space as a function of the redshift is given by

\[
D_L(z) = (1+z)^2 R\big(t(z), r(z)\big) = (1+z)^2\, r(z)\, a\big(\eta(z), r(z)\big)\,, \tag{5.1}
\]
where (t(z), r(z)), or equivalently (η(z), r(z)), is the solution of the radial null geodesic equations.

The past-directed radial null geodesic is given by
\[
\frac{dT(r)}{dr} = f\big(T(r), r\big)\,,\qquad f(t,r) = \frac{-R_{,r}(t,r)}{\sqrt{1+2E(r)}}\,, \tag{5.2}
\]
where T(r) is the time coordinate along the geodesic as a function of the coordinate r.

Applying the definition of redshift it is possible to obtain
\[
\frac{d\eta}{dz} = \frac{\partial_r t(\eta,r) - F(\eta,r)}{(1+z)\,\partial_\eta F(\eta,r)} = p(\eta,r)\,, \tag{5.3}
\]
\[
\frac{dr}{dz} = \frac{-\,a(\eta,r)}{(1+z)\,\partial_\eta F(\eta,r)} = q(\eta,r)\,, \tag{5.4}
\]


where we have used the following identities:
\[
f\big(t(\eta,r), r\big) = F(\eta,r)\,, \tag{5.5}
\]
\[
\dot{f}\big(t(\eta,r), r\big) = \frac{1}{a}\,\partial_\eta F(\eta,r)\,, \tag{5.6}
\]
\[
R_{,r}(t,r) = \partial_r R\big(t(\eta,r),r\big) + \partial_\eta R\big(t(\eta,r),r\big)\,\partial_r \eta\,, \tag{5.7}
\]
\[
F(\eta,r) = -\frac{1}{\sqrt{1-k(r)r^2}}\left[\partial_r\big(a(\eta,r)r\big) + \partial_\eta\big(a(\eta,r)r\big)\,\partial_r\eta\right]
= -\frac{1}{\sqrt{1-k(r)r^2}}\left[\partial_r\big(a(\eta,r)r\big) - \partial_\eta\big(a(\eta,r)r\big)\,a(\eta,r)^{-1}\,\partial_r t\right]. \tag{5.8}
\]
The functions p, q, F have an explicit analytical form which can be obtained from a(η,r) and t(η,r). Using this approach the coefficients of the geodesic equations are fully analytical, a significant improvement over previous methods, which required a numerical integration of the Einstein equations to obtain the function R(t,r).

Before deriving the set of differential equations for the solution of the inversion method, it is important to analyze how many independent initial conditions we need to fix. Our final goal will be to set up and solve a set of differential equations in redshift space starting from the center, where by definition z = 0. Given our choice of coordinates the model will be fully determined by the functions k(z), r(z), η(z), corresponding to three initial conditions
\[
r(0) = 0\,,\qquad \eta(0) = \eta_0\,,\qquad k(0) = k_0\,. \tag{5.9}
\]

The system of differential equations we will derive only involves first-order derivatives with respect to the redshift, so these initial conditions are sufficient. Given the assumption of a centrally located observer we have r_0 = 0, while the observed value of the local Hubble parameter H_0 corresponds to another constraint among the central values k_0, η_0, so only one of them is independent. After defining the Hubble rate as

\[
H_{LTB} = \frac{\partial_t a(t,r)}{a(t,r)} = \frac{\partial_\eta a(\eta,r)}{a(\eta,r)^2}\,, \tag{5.10}
\]

we need to impose the two following conditions:
\[
a(\eta_0, 0) = a_0\,, \tag{5.11}
\]
\[
H_{LTB}(\eta_0, 0) = H_0\,, \tag{5.12}
\]
where a_0 is, as expected, an arbitrary parameter, η_0 is the value of the generalized conformal time coordinate η corresponding to the central observer today, and H_0 is the observed value of the local Hubble parameter.


After re-writing the solution in terms of the following more convenient dimensionless quantities,
\[
a(T,r) = a_0\,\Omega_{0M}\,\frac{\sin^2\!\left(\tfrac{1}{2}T\sqrt{K(r)}\right)}{K(r)}\,, \tag{5.13}
\]
\[
t(T,r) = H_0^{-1}\,\frac{\Omega_{0M}}{2K(r)}\left[T - \frac{1}{\sqrt{K(r)}}\sin\!\left(\sqrt{K(r)}\,T\right)\right] + t_b(r)\,, \tag{5.14}
\]
\[
k(r) = (a_0 H_0)^2 K(r)\,, \tag{5.15}
\]
\[
\eta = T\,(a_0 H_0)^{-1}\,, \tag{5.16}
\]
\[
\rho_0 = 3\,\Omega_{0m}\,a_0^3 H_0^2\,. \tag{5.17}
\]

We can impose the two conditions a(η_0, 0) = a_0 and H_{LTB}(η_0, 0) = H_0 on Ω_{0M} and T_0 to finally get the initial conditions and the exact solution in the form
\[
a(T,r) = a_0\,(K_0+1)\,\frac{\sin^2\!\left(\tfrac{1}{2}T\sqrt{K(r)}\right)}{K(r)}\,, \tag{5.18}
\]
\[
t(T,r) = H_0^{-1}\,\frac{1+K_0}{2K(r)}\left[T - \frac{1}{\sqrt{K(r)}}\sin\!\left(\sqrt{K(r)}\,T\right)\right] + t_b(r)\,, \tag{5.19}
\]
\[
K_0 = K(0)\,, \tag{5.20}
\]
\[
T_0 = \frac{\arctan\!\left(2\sqrt{K_0}\right)}{\sqrt{K_0}}\,, \tag{5.21}
\]
\[
\Omega_{0m} = K_0 + 1\,. \tag{5.22}
\]

Since we have three unknowns {Ω_{0m}, T_0, K_0} and two constraints, one of them can always remain free, and the other two can be expressed in terms of it. Here we chose K_0 to be the free parameter, but we could equivalently choose another one. The above form of the solution is particularly useful to explore the full class of LTB models. Since K_0 is a free parameter which determines the central value of the dimensionless conformal time variable T_0, the realness condition sets a lower bound K_0 > −1. H_0 is also a free parameter which can be set according to observations and fixes the scale for the definition of the dimensionless quantities K(r), T, Ω_{0m}. This means that we can arbitrarily fix K_0 and H_0 as long as we impose the correct initial conditions given above.

As expected, a_0 does not appear in observable quantities such as the cosmic time t(η,r), and it can be fixed to 1. In this way we can self-consistently determine all the necessary initial conditions, and we are left with the freedom to fix K_0 arbitrarily. As we will see later, the change of K_0 does not actually modify the density contrast very much, so the model is determined once D_L is given.


5.2 Mapping D_L back to Density Contrast

In the previous section we have seen that it is possible to derive a fully analytical set of radial null geodesic equations. Our goal now is to use these equations to obtain a new set of differential equations that map an observed D_L(z) to a LTB model. In the coordinates we chose, a LTB solution is determined uniquely by the function k(r), so we will have a total of three independent functions to solve for: η(z), r(z), k(z). Since we already have two differential equations for the geodesics, we need one extra differential equation.

This can be obtained by differentiating the luminosity distance D_L(z) with respect to the redshift:
\[
\frac{d}{dz}\left[\frac{D_L^{obs}(z)}{(1+z)^2}\right] = \frac{\partial\big(r\,a(\eta,r)\big)}{\partial\eta}\,\frac{d\eta}{dz} + \frac{\partial\big(r\,a(\eta,r)\big)}{\partial r}\,\frac{dr}{dz} = s(z)\,, \tag{5.23}
\]

where D_L^{obs}(z) is the observed luminosity distance. In our case we will use the best-fit function obtained with the method developed in ch. 4. Now we have the set of equations we were looking for:

\[
\frac{d\eta}{dz} = p\big(\eta(z), r(z)\big) = p(z)\,, \tag{5.24}
\]
\[
\frac{dr}{dz} = q\big(\eta(z), r(z)\big) = q(z)\,, \tag{5.25}
\]
\[
\frac{d}{dz}\left[\frac{D_L^{obs}(z)}{(1+z)^2}\right] = s(z)\,. \tag{5.26}
\]

Since we will solve our differential equations with respect to the variable z, we need to transform the partial derivatives with respect to η and r in eqs. (5.3)-(5.4) according to the chain rule:

\[
\left.\frac{\partial h(\eta,r)}{\partial r}\right|_{\eta=\eta(z),\,r=r(z)} = \frac{d h\big(\eta(z),r(z)\big)}{dz}\,\frac{dz}{dr}\,, \tag{5.27}
\]
\[
\left.\frac{\partial h(\eta,r)}{\partial \eta}\right|_{\eta=\eta(z),\,r=r(z)} = \frac{d h\big(\eta(z),r(z)\big)}{dz}\,\frac{dz}{d\eta}\,, \tag{5.28}
\]

where h(η,r) is a generic function of the coordinates (η,r). After this substitution the equations contain only functions of the redshift z and derivatives with respect to z. The differential equations obtained in this form need to be further manipulated in order to re-write them in a canonical form in which the derivatives all appear on one side, since after the application of the chain rule to eqs. (5.3)-(5.4) derivative terms like dr(z)/dz, dη(z)/dz, dk(z)/dz also appear on the right-hand side. After a rather complicated algebraic manipulation done using


MATHEMATICA™ we get:
\[
0 = 4t^2 K'(z)\left[(3+2t^2)\sqrt{K(z)}\,r(z) + (3+t^2)S X - 3tS\right]
- 8t^3(1+z)K(z)^2 r'(z)T'(z)
- 2tK(z)r(z)K'(z)\left[3(1+t^2)T(z) + (3+5t^2)(1+z)T'(z)\right]
+ K(z)^{3/2}\left[-8t^4 r'(z) + 3(1+t^2)^2(1+z)r(z)T(z)K'(z)T'(z)\right], \tag{5.29}
\]
\[
0 = 2t(1+z)\left[(3+5t^2)r(z)K'(z) + 4t^2 K(z)r'(z)\right]
- 8\sqrt{K(z)}\,t^4 S - 6(1+t^2)^2(1+z)\,r(z)\,X\,K'(z)\,, \tag{5.30}
\]
\[
0 = 2K(z)\left[(1+K_0)t^2 r'(z) - (1+t^2)K(z)H_0\frac{d}{dz}\!\left(\frac{D_L^{obs}(z)}{(1+z)^2}\right)\right]
- 2(1+K_0)\,t\,r(z)\left[(t-X)K(z) - K(z)^{3/2}T'(z)\right]. \tag{5.31}
\]
In the above expressions we have expressed all the trigonometric functions in terms of the equivalent expressions in tan(X), according to

\[
S = \sqrt{1 - K(z)\,r(z)^2}\,, \tag{5.32}
\]
\[
t = \tan(X)\,, \tag{5.33}
\]
\[
X = \tfrac{1}{2}\sqrt{K(z)}\,T(z)\,. \tag{5.34}
\]

We have also used the dimensionless version of the solution in terms of K(z), T(z) derived in the previous section.

As can be seen, the above three equations are not linear in the derivative terms, but the second one only involves {r'(z), K'(z)}, while the other two involve all three functions {r'(z), K'(z), T'(z)}. This suggests that we can first solve for r'(z) in terms of K'(z) only:

\[
r'(z) = \frac{1}{8t^3(1+z)K(z)}\left[8t^4\sqrt{K(z)}\,S - 6(1+z)\,t\,r(z)K'(z) - 10(1+z)\,t^3 r(z)K'(z) + 6(1+z)\left(1+t^2\right)^2 X\,r(z)K'(z)\right], \tag{5.35}
\]
and then substitute it into the other two equations to get:

\[
K'(z) = t\Big(2tK(z)^{3/2}\left[9(1+t^2)r(z) + (3+t^2)S\,T(z)\right] - 4t^2 S K(z)\left[3 - 2t\sqrt{K(z)}\,T'(z)\right] - K(z)\left[8t^4 S\,(1+z)^{-1} + 3(3+4t^2+t^4)r(z)T(z)K(z)\right]\Big)\,, \tag{5.36}
\]
\[
T'(z) = \frac{1+K_0}{4t}\Big(-6t(1+3t^2)r(z)K(z) + \sqrt{K(z)}\left[8t^4 S\,(1+z)^{-1} + (3+10t^2+3t^4)r(z)T(z)K(z)\right] + 8t^2 K(z)^{3/2} r(z)T'(z) - 8t(1+t^2)(1+K_0)^{-1}K(z)^2 H_0 \frac{d}{dz}\!\left(\frac{D_L^{obs}(z)}{(1+z)^2}\right)\Big)\,. \tag{5.37}
\]


These two equations now only involve K'(z) and T'(z) in a linear form, so they can be solved directly, and then the result for K'(z) can be substituted into the equation for r'(z).

After some rather cumbersome algebraic manipulations we finally get:

\[
\frac{dT(z)}{dz} = \frac{2\sqrt{K(z)}}{3t(1+K_0)r(z)}\left[R_z(z)\,\frac{1 + \tfrac{1}{2}(1+3t^2)\sqrt{K(z)}\,r(z)}{\sqrt{K(z)}\,r(z) - tS} - \frac{(1+K_0)\,t^3 S}{(1+t^2)(1+z)K(z)^{3/2}}\right], \tag{5.38}
\]
\[
\frac{dr(z)}{dz} = -\frac{S}{3t\left(t^2 X - 3t + 3X\right)}\left[R_z(z)K(z)\,\frac{t(3+5t^2) - 3(1+t^2)^2 X}{(1+K_0)\left(-\sqrt{K(z)}\,r(z) + tS\right)} + \frac{2t^2\left(2t^3 - 3t^2 X + 3t - 3X\right)}{(1+t^2)(1+z)\sqrt{K(z)}}\right], \tag{5.39}
\]
\[
\frac{dK(z)}{dz} = \frac{4t^2\sqrt{K(z)}\,R_z(z)}{3(1+K_0)(1+t^2)(1+z)r(z)\left(t^2 X - 3t + 3X\right)}\left[\frac{R_z(z)\,(1+t^2)(1+z)K(z)^{3/2}}{-\sqrt{K(z)}\,r(z) + tS} - (1+K_0)\,t^2\right], \tag{5.40}
\]
where \(R_z(z) = H_0\,\frac{d}{dz}\!\left[\frac{D_L^{obs}(z)}{(1+z)^2}\right]\). The density can be expressed as

 . The density can be expressed as

ρ =H0(1 + t2)2k(z)3 (1 + K0)t4

 H0

3 (1 + t2)2X − 3t − 5t3 (1 + K0)t2(t2X − 3t + 3X)

+ 2 √

1 − S2− St (2t3− 3t2X + 3t − 3X) (1 + z)Rz(z) (1 + t2) k(z)3/2(t2X − 3t + 3X)

!

. (5.41)

Now we are ready to convert the luminosity distance into the density contrast.
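Schematically, the conversion amounts to integrating the system (5.38)-(5.40) outward in redshift and evaluating ρ along the solution. The sketch below keeps that interface but replaces the lengthy right-hand sides with trivial flat-limit stand-ins (clearly marked; D_L^obs is also a hypothetical toy curve, not the fitted one):

```python
import numpy as np
from scipy.integrate import solve_ivp

H0, K0 = 1.0, -0.1          # units with H0 = 1; free central curvature K0

def DL_obs(z):
    # Hypothetical stand-in for the fitted luminosity distance of ch. 4.
    return z / H0 * (1.0 + 0.5 * z)

def Rz(z, dz=1e-6):
    # R_z(z) = H0 * d/dz [ D_L^obs(z) / (1+z)^2 ] via central differences.
    g = lambda u: DL_obs(u) / (1.0 + u) ** 2
    return H0 * (g(z + dz) - g(z - dz)) / (2.0 * dz)

def rhs(z, y):
    # y = (T, r, K). The true right-hand sides are eqs. (5.38)-(5.40);
    # here they are simple placeholders so the skeleton stays runnable.
    T, r, K = y
    dr = Rz(z) / H0                 # placeholder, NOT eq. (5.39)
    dT = -dr                        # placeholder, NOT eq. (5.38)
    dK = 0.0                        # placeholder, NOT eq. (5.40)
    return [dT, dr, dK]

T0 = 1.0                            # central value; eq. (5.21) in general
sol = solve_ivp(rhs, (0.0, 0.15), [T0, 0.0, K0], dense_output=True, rtol=1e-8)
```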

5.3 Result

Here we show our preliminary results. Since we have not yet obtained the data from Keenan [7], we are not able to include their plots of the observed density contrast.

As our goal is to compare our inverted density contrast with the one obtained in [7], we follow their convention and define fields 1, 2, and 3 as shown in fig. 5.1. In the same figure we can also see that only fields 1 and 3 contain enough data points, so we will analyze these two fields only. After removing 5 outliers, we successfully fit m_obs − m_FRW in field 3 with a reduced χ² ∼ 0.77, and show in fig. 5.2 that SNe in field 3 are indeed brighter than expected. Statistically the fit also passes the null hypothesis, as the reference


Figure 5.1: This plot shows the sky map of all SNe and Cepheids in our dataset. The three fields are specified in Keenan's work [7] as the three regions with density contrast data. Our targets of interest are field 1 and field 3, which contain enough data points to fit the luminosity distance curve. For the sake of clarity we will keep using the same colors for field 1 and field 3 as [7] later on.

model [3] has a larger reduced χ² ∼ 0.91. In contrast, as shown in fig. 5.3, for field 1, where most of the higher-redshift SNe lie, the fit we get after removing 4 outliers is a simple shift in magnitude. The reduced χ² ∼ 0.55 is again much lower than what the vanilla FRW model could achieve [3]. Finally we invert each fitted curve within the 68% confidence band and get an envelope for the density contrast, as shown in figs. 5.4 and 5.5. According to sec. 5.1, K_0 is not fixed, but the density contrast is actually almost independent of K_0, as shown in fig. 5.6. We therefore choose a specific K_0 = −0.1 as an example, since we believe that we are actually in a void. Finally we compare this inverted density profile to the one from [7]. Qualitatively our results for fields 1 and 3 are consistent with what was observed through the luminous density, indicating that local structures could indeed alter the luminosity distance significantly.


Figure 5.2: This plot shows the 68% confidence band of the field 3 ∆m fit, along with the data points in this region. The deleted data points are shown in a darker color. The dashed curves are the 68% confidence band envelope and the vanilla curve is the best fit. The fitting model is chosen to be 5 functions of the form Φ(r) = r³, according to the dimensional argument of the polyharmonic spline interpolation method. The gray curve is the result from Riess 2016 [3].


Figure 5.3: This plot shows the 68% confidence band of the field 1 ∆m fit, along with the data points in this region. The deleted data points are shown in a darker color. The dashed curves are the 68% confidence band envelope and the vanilla curve is the best fit. The fitting model is chosen to be a simple constant shift. The gray curve is the result from Riess 2016 [3].


Figure 5.4: This plot shows the 68% confidence band of the inverted density contrast of field 3, with K_0 = −0.1. Clearly we can see a ∼68%-significant 10% under-density from around z = 0.02 to 0.08, or 100 ∼ 400 Mpc. One can directly compare this plot to Keenan's using the conversion d(Mpc) = H_0^{-1} z = 4400 z Mpc. One important feature in Keenan's result is the overdense region at around z = 0.1, and as we can see such a feature is within the 68% confidence band of our result. The gray curve is the inverted density contrast of the FRW model with parameters from Riess 2016 [3].


Figure 5.5: This plot shows the 68% confidence band of the inverted density contrast of field 1, with K_0 = −0.1. Clearly we can see a ∼95%-significant 10% under-density everywhere. One can directly compare this plot to Keenan's using the conversion d(Mpc) = H_0^{-1} z = 4400 z Mpc, and find that the two agree with each other quite well. The gray curve is the inverted density contrast of the FRW model with parameters from Riess 2016 [3].


Figure 5.6: This plot shows the inverted density contrast of the best fit in field 3, for different values of K_0. The blue, green, and red curves correspond to K_0 = −0.1, 0, and 0.1 respectively.


Chapter 6

Quantization of Spacetime

In Riemannian geometry, the quadratic distance function g living on a manifold M describes the structure of the tangent bundle TM uniquely, through specifying the relation between the quadratic line element ds²_R and the coordinate difference dx. The quadratic nature of ds²_R = g_{µν} dx^µ dx^ν, which implies Lorentz symmetry and the Pythagorean theorem, is based on numerous experimental facts [10]. It is one of the foundations of GR, and even when quantizing gravity people usually promote it to its quantum version without modification. But in both LQG and superstring theories, the existence of a minimal distance measure suggests that the infinitely differentiable geometry may be an illusion that ceases to be valid at the smallest scale. This interesting consequence of combining GR with QFT leads to the notion that the spacetime structure itself may have to be modified. H. S. Snyder coined this attempt under the name “quantized space-time” in his seminal article published in 1947 [11].

Currently there are two major routes to tackle the quantization of spacetime. Dimensional reduction from a higher-dimensional momentum space, proposed by Snyder, is a popular approach, which was followed by S. Majid, G. Amelino-Camelia, and others in the construction of their own versions of quantized spacetime [12, 15, 13]. The other route, first introduced by A. Connes [16, 17], comes from partial differential equation analysis on a non-commutative space (C*-algebra), where fields and Fourier analysis can be defined classically, with a twisted measure providing the non-commutativity. This approach has proven extremely versatile in the pursuit of quantized spacetime, as non-commutativity usually plays an important role [18, 19, 20, 21].

Here we consider a new way of deforming the spacetime algebra, first proposed by R. Adler [22], which has its roots in the Clifford algebra of the tangent bundle. Instead of a bosonic structure where the generators of momentum space serve as the coordinate measures, this theory treats the proper distance measure as the composition of infinitely many generators of the Clifford structure on the tangent bundle of the position space. A continuous geodesic


therefore becomes piecewise-linear and so is the manifold. As we will show by carefully defining the measure, a QM system can be constructed on top of it.

6.1 Dimensional Reduction of Momentum Space

Snyder suggested [11] a deformation stemming from the exponential map of a 4-d de Sitter (dS) space with “radius” a^{-1} embedded inside a 5-d momentum space (dk²_{R^{1,4}} = dk_0² − dk_1² − dk_2² − dk_3² − dk_4²), i.e.,
\[
a^{-2} = k_0^2 - k_1^2 - k_2^2 - k_3^2 - k_4^2\,. \tag{6.1}
\]
Next, x̂_µ is chosen to be the momentum translation Killing vector such that it satisfies Lorentz symmetry. Then the deliberately chosen conformally flat hypersurface guarantees that the Lie derivatives of the Killing vectors must be proportional to the Lorentz transformation Killing vectors Ĵ_{µν}:

\[
\hat{x}_\mu = i a \left(k_4 \frac{\partial}{\partial k^\mu} + k_\mu \frac{\partial}{\partial k_4}\right), \tag{6.2}
\]
\[
[\hat{x}_\mu, \hat{x}_\nu] = i a^2 \hat{J}_{\mu\nu} = -a^2\left(k_\mu \frac{\partial}{\partial k^\nu} - k_\nu \frac{\partial}{\partial k^\mu}\right), \tag{6.3}
\]
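As a sanity check of the algebra in eq. (6.3), one can verify symbolically, in a Euclidean two-index toy version that ignores the metric signs, that two generators of this k_4-mixing form close on a rotation generator (an illustrative check, not the full Lorentzian computation):

```python
import sympy as sp

k1, k2, k4 = sp.symbols('k1 k2 k4')
f = sp.Function('f')(k1, k2, k4)

def X(k, g):
    # Toy generator of the form in eq. (6.2): X = k4 d/dk + k d/dk4.
    return k4 * sp.diff(g, k) + k * sp.diff(g, k4)

# The commutator [X1, X2] acting on a test function f ...
comm = sp.expand(X(k1, X(k2, f)) - X(k2, X(k1, f)))
# ... reduces to the rotation generator k1 d/dk2 - k2 d/dk1, cf. eq. (6.3).
rot = sp.expand(k1 * sp.diff(f, k2) - k2 * sp.diff(f, k1))
print(sp.simplify(comm - rot))   # 0
```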

where k^µ = η^{µν} k_ν, η_{µν} is the 4-d Minkowski metric, and [ , ] is the commutator. The momentum coordinate is given by the exponential map,

\[
\hat{p}_\mu = a^{-1} k_\mu / k_4\,, \tag{6.4}
\]

on which the Lorentz transformation Killing vectors Ĵ_{µν} locally have the same form as the traditional Lorentz transformation generators, Ĵ_{µν} = x̂_µ p̂_ν − x̂_ν p̂_µ. Snyder's approach can therefore be viewed as a non-abelian realization of the Lorentz group. The Heisenberg relation and the uncertainty relation are twisted accordingly to

\[
[\hat{x}^\mu, \hat{p}_\nu] = i\left(\delta^\mu_\nu - a^2 \eta^{\mu\lambda}\hat{p}_\lambda \hat{p}_\nu\right), \tag{6.5}
\]
\[
\Delta x^\mu\,\Delta p_\nu \ge \frac{1}{2}\left|\delta^\mu_\nu - a^2 \eta^{\mu\lambda}\langle \hat{p}_\lambda \hat{p}_\nu\rangle\right|. \tag{6.6}
\]
As we will see in sec. 7.4, our own version of quantized spacetime carries the same uncertainty relation as Snyder's up to first order in p².

A weaker deformation, called the κ-Poincaré group, later invoked by G. Amelino-Camelia [15, 13] for doubly special relativity, was first introduced by S. Majid and H. Ruegg [12] as follows:
\[
[x^i, x^j] = 0\,,\qquad [x^i, x^0] = \kappa^{-1} x^i\,,\qquad [p_\mu, p_\nu] = 0\,. \tag{6.7}
\]

