

Take-home Final Project

Due date: January 8, 2020

December 16, 2019

The first question is to estimate the multinomial probit model (MNP). Suppose there are $n$ consumers in the market, $i = 1, 2, \ldots, n$. Each of them makes a consumption decision according to her indirect utility over commodities, and each consumer picks the commodity with the largest indirect utility. Let $X_{ij} = (X_{ij1}, \ldots, X_{ijp})^T$ denote a vector of observed characteristics of commodity $j$ for consumer $i$; e.g., $\mathrm{price}_{ij}$ is the trading price of $j$ for consumer $i$. For simplicity, in this question we assume $X_{ij}$ is a scalar ($p = 1$). The indirect utility is assumed to be linearly separable; namely, the (random) utility of consumer $i$ choosing commodity $j$ follows

$$U_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + u_{ij} = V_{ij}(\beta) + u_{ij},$$

where $V_{ij}(\beta)$ is the deterministic utility (from the researcher's point of view) and $u_{ij}$ captures the demand shock, or unobserved evaluation of the utility of commodity $j$ for consumer $i$, which is generally unknown to the researcher (but known to the consumer). In this exercise $j = 0, 1, 2, 3$, i.e., there are 4 commodities. For normalization purposes, we also assume $V_{i0} = 0$ (0 is the outside choice).

By utility maximization, consumer $i$ chooses commodity $j$ if it maximizes her indirect utility:

$$Y_i = j \iff U_{ij} > U_{ik} \text{ for all } k \neq j.$$

The data observed by the researcher are $\{Y_i, X_i\}_{i=1}^n$, where $Y_i \in \{0, 1, 2, 3\}$ and $X_i = \{X_{ij}\}_{j=0}^{3}$. Specifically, the choice behavior is:

1. $Y_i = 0$ iff

$$u_{i0} > \beta_{01} + \beta_{11}X_{i1} + u_{i1}, \qquad u_{i0} > \beta_{02} + \beta_{12}X_{i2} + u_{i2}, \qquad u_{i0} > \beta_{03} + \beta_{13}X_{i3} + u_{i3},$$

which is equivalent to

$$\underbrace{\begin{bmatrix} -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \end{bmatrix}}_{M_0}\begin{bmatrix} u_{i0} \\ u_{i1} \\ u_{i2} \\ u_{i3} \end{bmatrix} < \underbrace{\begin{bmatrix} -(\beta_{01} + \beta_{11}X_{i1}) \\ -(\beta_{02} + \beta_{12}X_{i2}) \\ -(\beta_{03} + \beta_{13}X_{i3}) \end{bmatrix}}_{\ell_0(X;\, \beta)}.$$

2. $Y_i = 1$ iff

$$\beta_{01} + \beta_{11}X_{i1} + u_{i1} > u_{i0}, \qquad \beta_{01} + \beta_{11}X_{i1} + u_{i1} > \beta_{02} + \beta_{12}X_{i2} + u_{i2}, \qquad \beta_{01} + \beta_{11}X_{i1} + u_{i1} > \beta_{03} + \beta_{13}X_{i3} + u_{i3},$$

which is equivalent to

$$\underbrace{\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{bmatrix}}_{M_1}\begin{bmatrix} u_{i0} \\ u_{i1} \\ u_{i2} \\ u_{i3} \end{bmatrix} < \underbrace{\begin{bmatrix} \beta_{01} + \beta_{11}X_{i1} \\ \beta_{01} - \beta_{02} + \beta_{11}X_{i1} - \beta_{12}X_{i2} \\ \beta_{01} - \beta_{03} + \beta_{11}X_{i1} - \beta_{13}X_{i3} \end{bmatrix}}_{\ell_1(X;\, \beta)}.$$

3. $Y_i = 2$ iff

$$\beta_{02} + \beta_{12}X_{i2} + u_{i2} > u_{i0}, \qquad \beta_{02} + \beta_{12}X_{i2} + u_{i2} > \beta_{01} + \beta_{11}X_{i1} + u_{i1}, \qquad \beta_{02} + \beta_{12}X_{i2} + u_{i2} > \beta_{03} + \beta_{13}X_{i3} + u_{i3},$$

which is equivalent to

$$\underbrace{\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & -1 & 1 \end{bmatrix}}_{M_2}\begin{bmatrix} u_{i0} \\ u_{i1} \\ u_{i2} \\ u_{i3} \end{bmatrix} < \underbrace{\begin{bmatrix} \beta_{02} + \beta_{12}X_{i2} \\ \beta_{02} - \beta_{01} + \beta_{12}X_{i2} - \beta_{11}X_{i1} \\ \beta_{02} - \beta_{03} + \beta_{12}X_{i2} - \beta_{13}X_{i3} \end{bmatrix}}_{\ell_2(X;\, \beta)}.$$

4. $Y_i = 3$ iff

$$\beta_{03} + \beta_{13}X_{i3} + u_{i3} > u_{i0}, \qquad \beta_{03} + \beta_{13}X_{i3} + u_{i3} > \beta_{01} + \beta_{11}X_{i1} + u_{i1}, \qquad \beta_{03} + \beta_{13}X_{i3} + u_{i3} > \beta_{02} + \beta_{12}X_{i2} + u_{i2},$$

which is equivalent to

$$\underbrace{\begin{bmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \end{bmatrix}}_{M_3}\begin{bmatrix} u_{i0} \\ u_{i1} \\ u_{i2} \\ u_{i3} \end{bmatrix} < \underbrace{\begin{bmatrix} \beta_{03} + \beta_{13}X_{i3} \\ \beta_{03} - \beta_{01} + \beta_{13}X_{i3} - \beta_{11}X_{i1} \\ \beta_{03} - \beta_{02} + \beta_{13}X_{i3} - \beta_{12}X_{i2} \end{bmatrix}}_{\ell_3(X;\, \beta)}.$$

In the probit model, we further assume that the $u_i = (u_{i0}, u_{i1}, u_{i2}, u_{i3})^T$ are joint normal, identically for all $i$, i.e.,

$$u_i \sim N(0, \Sigma),$$

where, for the purpose of identification of the parameters, the variance-covariance matrix follows

$$\Sigma = \begin{bmatrix} 1+\rho & 0 & 0 & 0 \\ 0 & 1+\rho & 0 & 0 \\ 0 & 0 & 1+\rho & \rho \\ 0 & 0 & \rho & 1+\rho \end{bmatrix}, \qquad \rho \in (0, 1),$$
and this covariance matrix captures the correlations among different choices of commodities. In this specification, the unobserved characteristics of choices 2 and 3 are positively correlated. Since $u_i$ is normally distributed, $M_j u_i$ is also joint normal, with covariance matrix $\Sigma_j = \mathrm{Var}(M_j u_i)$. All the observations are i.i.d. draws from the above MNP, so the likelihood function of the parameters $\theta = (b^T, \rho)^T$ can be written as

$$L_n(\theta \mid X, Y) = \prod_{i=1}^n \prod_{j=0}^{3} \Pr\big(M_j u_i < \ell_j(X_i; b) \mid X_i\big)^{1\{Y_i = j\}} = \prod_{i=1}^n \prod_{j=0}^{3} \Phi_{\Sigma_j}\big(\ell_j(X_i; b)\big)^{1\{Y_i = j\}},$$

where $\Phi_{\Sigma_j}(\cdot)$ is the CDF of the multivariate normal distribution with mean 0 and covariance $\Sigma_j$. Therefore the MLE of $\theta$ solves the following optimization problem:

$$\hat{\theta} = \arg\max_{\theta \in \Theta} \log L_n(\theta \mid X, Y). \qquad (1)$$
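For concreteness, here is a minimal sketch in Python/NumPy of how the selection matrices $M_j$ and the implied contrast covariances $\Sigma_j = M_j \Sigma M_j^T$ can be assembled. The helper names are my own, and the sign conventions follow the reconstruction above:

```python
import numpy as np

def selection_matrix(j, J=4):
    """M_j maps u = (u_0, ..., u_3) to the three contrasts u_k - u_j, k != j,
    so that Y_i = j is equivalent to M_j u_i < ell_j(X_i; beta)."""
    M = np.zeros((J - 1, J))
    for r, k in enumerate(k for k in range(J) if k != j):
        M[r, k] = 1.0   # the competing alternative k
        M[r, j] = -1.0  # minus the chosen alternative j
    return M

def sigma_u(rho):
    """Var-cov of u_i: diagonal 1 + rho, covariance rho between choices 2 and 3."""
    S = (1.0 + rho) * np.eye(4)
    S[2, 3] = S[3, 2] = rho
    return S

rho = 0.5
Sigma = sigma_u(rho)
Sigmas = [selection_matrix(j) @ Sigma @ selection_matrix(j).T for j in range(4)]
```

These $\Sigma_j$ are exactly the matrices part (b) below asks you to specify analytically.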

(a) Simulate the DGP: $n = 500$; $X_{ij} \sim \mathrm{Unif}[-2, 2]$ i.i.d. across $i$ and $j$; $\rho = 0.5$;

i. $\beta_{0j} = 1$ and $\beta_{1j} = 0.5$, which are known to be identical across $j$ (the researcher knows the $\beta$'s are identical);

ii. $\beta_{01} = 1$ and $\beta_{02} = \beta_{03} = 0.5$; $\beta_{11} \sim \mathrm{Unif}[0, 1]$ and $\beta_{12} = \beta_{13} \sim \mathrm{Unif}[0, 1]$.

(b) Specify $\Sigma_j = \mathrm{Var}(M_j u)$, $j = 0, 1, 2, 3$, and discuss the identification of $\rho$.

(c) In case (i), assume $\rho$ is unknown, and estimate $(\beta_0, \beta_1, \rho)$ according to (1). The maximization of $\log L_n(\theta \mid X, Y)$ can be implemented using a profiling procedure: given $\rho$,

$$\left(\hat{\beta}_0(\rho), \hat{\beta}_1(\rho)\right) = \arg\max_{b_0, b_1} \log L_n(b_0, b_1, \rho \mid X, Y), \qquad (2)$$

and then solve for $\rho$ according to

$$\hat{\rho} = \arg\max_{\rho \in (0, 1)} \log L_n\left(\hat{\beta}_0(\rho), \hat{\beta}_1(\rho), \rho \mid X, Y\right).$$

In case (ii), assume $\rho$ is known to be 0.5 ($\rho = 0.5$), and solve for $\beta_{01}$, $\beta_{02}$, $\beta_{11}$, and $\beta_{12}$ (since it is known that $\beta_{02} = \beta_{03}$ and $\beta_{12} = \beta_{13}$) by

$$\max_{b_{01}, b_{02}, b_{11}, b_{12}} \log L_n(b_{01}, b_{02}, b_{11}, b_{12} \mid X, Y).$$

Repeat drawing data from the DGP and running your estimation 100 times, and report the mean and standard deviation of your estimates of $(\beta, \rho)$.

Hints:

(a) The conditional choice probability (CCP), $\Phi_{\Sigma_j}(\ell_j(X_i; b))$, should be evaluated and calculated using the GHK sampler (do NOT use a packaged implementation).

(b) In calculating the profiled MLE, the inner loop of (2) can be carried out with the Nelder-Mead algorithm, since gradients of the multivariate normal CDF won't be easily obtained. $\hat{\rho}$ can be estimated through a line search over the interval $(0, 1)$.
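Following these hints, here is a minimal sketch of a hand-rolled GHK simulator for the CCP and the resulting simulated log-likelihood for case (i). It reuses `selection_matrix` and `sigma_u` from the earlier sketch; the draw count, clipping constants, and seeding scheme are illustrative choices of mine, not part of the assignment:

```python
import numpy as np
from scipy.stats import norm

def ghk_prob(a, Sigma, n_draws=200, seed=0):
    """GHK simulator for P(Z < a) componentwise, Z ~ N(0, Sigma).
    Write Z = L @ eta with L = chol(Sigma); draw eta_k one at a time from a
    standard normal truncated by the earlier draws, and average the product
    of the truncation probabilities."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    m = len(a)
    U = rng.uniform(size=(n_draws, m))
    eta = np.zeros((n_draws, m))
    prob = np.ones(n_draws)
    for k in range(m):
        ub = (a[k] - eta[:, :k] @ L[k, :k]) / L[k, k]  # eta_k must stay below ub
        p_k = norm.cdf(ub)
        prob *= p_k
        # inverse-CDF draw from N(0,1) truncated to (-inf, ub)
        eta[:, k] = norm.ppf(np.clip(U[:, k] * p_k, 1e-12, 1 - 1e-12))
    return prob.mean()

def neg_loglik(params, X, Y, rho):
    """Case (i): common (b0, b1) across j. X is n x 3, Y[i] in {0,1,2,3}.
    Reusing the same uniforms at every parameter value (common random
    numbers) keeps the simulated objective smooth for Nelder-Mead."""
    b0, b1 = params
    Sigma = sigma_u(rho)
    ll = 0.0
    for i, j in enumerate(Y):
        V = np.concatenate(([0.0], b0 + b1 * X[i]))    # V_i0 = 0 normalization
        Mj = selection_matrix(j)
        ell = -Mj @ V                                  # row k: V_ij - V_ik
        p = ghk_prob(ell, Mj @ Sigma @ Mj.T, seed=i)   # CCP via GHK
        ll += np.log(max(p, 1e-300))
    return -ll
```

The inner loop of (2) is then, e.g., `scipy.optimize.minimize(neg_loglik, x0=[1.0, 0.5], args=(X, Y, rho), method="Nelder-Mead")` for each candidate $\rho$ on a grid over $(0, 1)$, with $\hat{\rho}$ read off the profile.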

Quasi-MCMC for Quantile Regression: Similar to the model we considered in class, we aim to estimate the following quantile regression model:

$$Y_i = X_i^T\beta(U_i).$$

For simplicity, $X \perp U \sim \mathrm{Unif}[0, 1]$, and we assume that for any given $x \in \mathcal{X}$ the quantile function $\tau \mapsto x^T\beta(\tau)$ is increasing in $\tau$. Then

$$\Pr\left(Y_i < X_i^T\beta(\tau) \mid X_i\right) = \Pr\left(X_i^T\beta(U_i) < X_i^T\beta(\tau) \mid X_i\right) = \Pr(U_i < \tau) = \tau;$$

that is, the $\tau$-quantile function of $Y$ given $X$ is

$$Q_\tau(Y_i \mid X_i) = X_i^T\beta(\tau).$$
The quantile regression can also be written as an additive model:

Y = X0 ( ) + X0 ( (U) ) ( ))

= X0 ( ) + " ( ) 4

and in median regression, write " is short for " (0:5) and similarly is short for

(0:5), so Yi = X0i + "i

. A typical example will be linear location-scale model:

suppose X Unif[0; 1] ? "  N (0; 1), Y = 0 + 1X + (1 + X) " = 0 + 1X + (1 + X) 1 (U) =  0 + 1 (U) +  1 + 1 (U) X
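As a quick numerical check of this representation, one can fix $x$ and compare the empirical $\tau$-quantile of $Y$ against $(\beta_0 + \Phi^{-1}(\tau)) + (\beta_1 + \Phi^{-1}(\tau))x$. This is only a sketch: the values $\beta_0 = 1$, $\beta_1 = 2$, $x = 0.3$, $\tau = 0.75$ are arbitrary illustration choices, not part of the assignment:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
b0, b1, tau, x = 1.0, 2.0, 0.75, 0.3    # arbitrary illustration values
eps = rng.standard_normal(1_000_000)
y = b0 + b1 * x + (1 + x) * eps          # Y | X = x in the location-scale model
empirical = np.quantile(y, tau)
theoretical = (b0 + norm.ppf(tau)) + (b1 + norm.ppf(tau)) * x
print(empirical, theoretical)            # the two should agree closely
```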

The coefficient $\beta(\tau)$ can be obtained by minimizing a "check" loss function:

$$\beta(\tau) = \arg\min_{b \in B} E\left[\rho_\tau\left(Y_i - X_i^T b\right)\right], \qquad (3)$$

where $\rho_\tau(u) = (\tau - 1\{u \le 0\})u$; when $\tau = 0.5$, $\rho_{0.5}(u) \propto |u|$. Therefore (3) suggests the finite-sample estimator

$$\hat{\beta}(\tau) = \arg\min_{b \in B} \sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right). \qquad (4)$$

For $b \in \mathbb{R}^p$, define the residual $r_i(b) = Y_i - X_i^T b$; then

$$\frac{1}{n}\sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right) = \int \rho_\tau(u)\, dF_n(u; b), \qquad (5)$$

where $F_n(u; b)$ is the empirical CDF of the $r_i(b)$:

$$F_n(u; b) = \frac{1}{n}\sum_{i=1}^n 1\{r_i(b) < u\}.$$

Since neither the empirical CDF nor $\rho_\tau$ is smooth, Fernandes, Guerre & Horta (2019) consider a way of smoothing $F_n(u; b)$ that leads to a smoothed objective function. The idea is the following:
1. Smooth $F_n(u; b)$ by a kernel function $K$:

$$F_{nh}(u; b) = \int_{-\infty}^{u} f_h(t; b)\, dt, \qquad f_h(t; b) = \frac{1}{nh}\sum_{i=1}^n K\!\left(\frac{t - r_i(b)}{h}\right),$$

where $K$ is a symmetric density (kernel) function and $h$ is the corresponding bandwidth, which shrinks to 0 as $n \to \infty$.

2. Replace $F_n(u; b)$ by $F_{nh}(u; b)$ and redefine the objective function for $\beta(\tau)$; it can be shown that

$$\int \rho_\tau(u)\, dF_{nh}(u; b) = \frac{1}{n}\sum_{i=1}^n \ell_h\!\left(Y_i - X_i^T b\right), \qquad (6)$$

where

$$\ell_h(u) = \int \rho_\tau(v)\, K_h(v - u)\, dv, \qquad K_h(\cdot) = h^{-1}K(\cdot / h),$$

which is the so-called convolution-type smoothing of the objective function (5).

3. If $K(u) = \phi(u)$, the p.d.f. of $N(0, 1)$, it can also be shown that

$$\ell_h(u) = \frac{1}{2}E\left|Z_{u,h}\right| + \left(\tau - \frac{1}{2}\right)u = \frac{h}{2}\, G\!\left(\frac{u}{h}\right) + \left(\tau - \frac{1}{2}\right)u, \qquad Z_{u,h} \sim N(u, h^2),$$

where

$$G(x) = \left(\frac{2}{\pi}\right)^{1/2}\exp\!\left(-\frac{x^2}{2}\right) + x\left(1 - 2\Phi(-x)\right)$$

and $\Phi$ is the CDF of $N(0, 1)$.
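A direct transcription of these formulas into Python, as a sketch; the comparison with the raw check loss at a tiny bandwidth is only a sanity check of the closed form:

```python
import numpy as np
from scipy.stats import norm

def check_loss(u, tau):
    """rho_tau(u) = (tau - 1{u <= 0}) * u."""
    return (tau - (u <= 0)) * u

def G(x):
    """G(x) = sqrt(2/pi) * exp(-x^2/2) + x * (1 - 2 * Phi(-x))."""
    return np.sqrt(2 / np.pi) * np.exp(-x**2 / 2) + x * (1 - 2 * norm.cdf(-x))

def ell_h(u, tau, h):
    """Convolution-smoothed check loss with a Gaussian kernel:
    ell_h(u) = (h/2) * G(u/h) + (tau - 1/2) * u."""
    return 0.5 * h * G(u / h) + (tau - 0.5) * u

# sanity check: as h -> 0, ell_h approaches rho_tau
u = np.linspace(-2, 2, 5)
print(ell_h(u, 0.5, 1e-4) - check_loss(u, 0.5))  # ~0 everywhere
```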

(a) (Fernandes, Guerre & Horta (2019), Journal of Business and Economic Statistics) Simulate the following DGP and estimate $\beta(\tau)$, $\tau = 0.5$, by minimizing (6):

$$Y = X_1 + X_2\left(0.5 + 0.5\,\Phi^{-1}(U)\right) + X_3\left(0.5 + 0.5\,\Phi^{-1}(U)\right) + 0.5\,\Phi^{-1}(U),$$

where $U \sim \mathrm{Unif}[0, 1]$, $X_1 \sim N(0, 2)$, and $X_2, X_3 \sim \mathrm{Unif}[0, 1]$, all mutually independent. Try two different sample sizes, $n = 200$ and $n = 400$. The optimization can be implemented through quasi-Newton methods or a gradient descent algorithm. Also repeat drawing data from the same DGP and running your estimation 200 times, and report the mean and standard deviation of your estimates.
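A minimal end-to-end sketch of one replication, reusing `ell_h` from the previous sketch. The bandwidth $h = n^{-1/5}$ and the reading of $N(0, 2)$ as variance 2 are my own illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def simulate_fgh(n, rng):
    """One draw from the DGP in part (a), with an intercept column."""
    U = rng.uniform(size=n)
    X1 = rng.normal(0, np.sqrt(2), size=n)   # N(0, 2): variance 2 assumed
    X2 = rng.uniform(size=n)
    X3 = rng.uniform(size=n)
    z = norm.ppf(U)
    Y = X1 + X2 * (0.5 + 0.5 * z) + X3 * (0.5 + 0.5 * z) + 0.5 * z
    X = np.column_stack([np.ones(n), X1, X2, X3])
    return X, Y

def smoothed_objective(b, X, Y, tau, h):
    return ell_h(Y - X @ b, tau, h).mean()   # ell_h from the previous sketch

rng = np.random.default_rng(0)
n, tau = 400, 0.5
X, Y = simulate_fgh(n, rng)
h = n ** (-1 / 5)                            # illustrative bandwidth choice
res = minimize(smoothed_objective, x0=np.zeros(4), args=(X, Y, tau, h),
               method="BFGS")                # quasi-Newton, as suggested
print(res.x)
```

At $\tau = 0.5$ the target is $\beta(0.5) = (0, 1, 0.5, 0.5)$, since the $0.5\,\Phi^{-1}(U)$ intercept term vanishes at the median; wrapping this in a loop over 200 replications gives the requested means and standard deviations.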

(b) (Chernozhukov and Hong (2003), Journal of Econometrics) The typical quantile regression estimate can be obtained directly by minimizing (5). One standard procedure is linear programming with interior-point iteration. An alternative method that deals with (5) is to simulate from its quasi-posterior density using MCMC. Define the quasi-likelihood

$$L_n(b \mid \text{data}) \propto \exp\!\left(-\sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right)\right)$$

and the quasi-posterior density

$$p(b \mid \text{data}) = \frac{\pi(b)\exp\left(-\sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right)\right)}{\int \pi(b)\exp\left(-\sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right)\right) db} \propto \pi(b)\exp\!\left(-\sum_{i=1}^n \rho_\tau\left(Y_i - X_i^T b\right)\right),$$

where $\pi(b)$ is the prior distribution of $b$, which is assumed to be $\mathrm{Unif}[-10, 10]$, and

$$\hat{\beta} = \int b\, p(b \mid \text{data})\, db.$$

Calculate $\hat{\beta}$ through MCMC sampling from $p(b \mid \text{data})$: generate draws $(b^1, \ldots, b^M)$ and report $\bar{b}$ (the average of $b^c, \ldots, b^M$, where $c$ is some positive number, e.g., $c = 1000$ and $M = 20000$) after a burn-in period ($m > c$). Please also plot your sampling path $(b^1, \ldots, b^M)$. (Hint: (random-walk proposal) use $N(\cdot, \sigma^2)$ as the proposal density, where $\sigma^2$ is a tuning parameter that can be adjusted during the sampling procedure.) How do the results look if you repeat the MCMC 100 times with independent samples drawn from the DGP in (a)?
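A minimal random-walk Metropolis sketch for this quasi-posterior, assuming $\tau = 0.5$ and the flat $\mathrm{Unif}[-10, 10]$ prior; the fixed proposal scale `sigma` here is a placeholder for the tuning the hint describes:

```python
import numpy as np

def log_quasi_posterior(b, X, Y, tau=0.5):
    """log pi(b) + log L_n(b | data), with a flat prior on [-10, 10]^p."""
    if np.any(np.abs(b) > 10):
        return -np.inf                        # outside the prior support
    u = Y - X @ b
    return -np.sum((tau - (u <= 0)) * u)      # minus the check-loss criterion

def rw_metropolis(X, Y, b0, sigma=0.1, M=20000, seed=0):
    """Random-walk Metropolis with N(b, sigma^2 I) proposals; returns the path."""
    rng = np.random.default_rng(seed)
    p = len(b0)
    path = np.empty((M, p))
    b, logp = np.asarray(b0, float), log_quasi_posterior(b0, X, Y)
    for m in range(M):
        prop = b + sigma * rng.standard_normal(p)
        logp_prop = log_quasi_posterior(prop, X, Y)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
            b, logp = prop, logp_prop
        path[m] = b
    return path

# usage: path = rw_metropolis(X, Y, b0=np.zeros(X.shape[1]))
#        beta_hat = path[1000:].mean(axis=0)   # average after burn-in c = 1000
```

`path[c:].mean(axis=0)` gives $\bar{b}$, and plotting the columns of `path` gives the requested sampling paths.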

(c) (Optional) (Koenker (2005), Quantile Regression, Econometric Society Monograph Series) Estimate $\beta$ according to (4) using linear programming with an interior-point algorithm (Mehrotra's predictor-corrector method (1992)) and compare your results with (a)-(b). (Hint: a good reference for the computational aspects of quantile regression is http://www.econ.uiuc.edu/~roger/research/rq/rq.html)

