metropolis-hastings questions

259 metropolis-hastings questions.

Gibbs sampling is a profound and popular technique for creating samples of Bayesian networks (BNs). Metropolis sampling is another popular technique, though - in my opinion - a less accessible method. ...

In time-series data generated by the Monte Carlo Metropolis algorithm, when is the standard error (the correlation between two data points is assumed to be negligible) higher - when the change in the ...

There are different kinds of MCMC algorithms: Metropolis-Hastings, Gibbs, and importance/rejection sampling (related). Why would one use Gibbs sampling instead of Metropolis-Hastings? I suspect there ...

In the Metropolis–Hastings algorithm for sampling a target distribution, let: $\pi_{i}$ be the target density at state $i$, $\pi_j$ be the target density at the proposed state $j$, $h_{ij}$ be the ...

I have been trying to understand the Metropolis-Hastings algorithm in order to write code for estimating the parameters of a model (i.e. $f(x)=ax$). According to the literature, the Metropolis-...

I am running a Metropolis-Hastings MCMC to find the distribution of a parameter that takes real, positive values. I was considering using the truncated normal distribution, and was wondering if I have ...
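A truncated-normal random walk is a valid proposal for a positive parameter, but it is asymmetric, so the proposal densities have to appear in the acceptance ratio. A minimal R sketch, assuming a hypothetical log-target log_post (a Gamma density is used as a stand-in) and truncation to (0, Inf):

# Sketch: random-walk MH with a proposal truncated to (0, Inf) and the Hastings correction.
rtnorm0 <- function(mean, sd) {           # draw from a normal truncated to (0, Inf), inverse-CDF
  u <- runif(1, pnorm(0, mean, sd), 1)
  qnorm(u, mean, sd)
}
dtnorm0 <- function(x, mean, sd) {        # its density on (0, Inf)
  dnorm(x, mean, sd) / (1 - pnorm(0, mean, sd))
}
log_post <- function(theta) dgamma(theta, shape = 2, rate = 1, log = TRUE)  # placeholder target

mh_truncated <- function(n_iter = 5000, theta0 = 1, sd_prop = 0.5) {
  theta <- numeric(n_iter); theta[1] <- theta0
  for (t in 2:n_iter) {
    cur  <- theta[t - 1]
    prop <- rtnorm0(cur, sd_prop)
    log_alpha <- log_post(prop) - log_post(cur) +
      log(dtnorm0(cur, prop, sd_prop)) - log(dtnorm0(prop, cur, sd_prop))  # Hastings term
    theta[t] <- if (log(runif(1)) < log_alpha) prop else cur
  }
  theta
}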

I have time-series data generated from a Monte Carlo Metropolis simulation. I have estimated correlation coefficients using $r_k = \frac{c_k}{c_0}$, where $c_0$ is the variance and $c_k = \frac{1}{N}\...

I have time-series data generated via the Metropolis algorithm - Monte Carlo simulations. Since these data must have some correlation between them, the formula for the standard error of IID variables must ...
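One common fix is to estimate the autocorrelations r_k = c_k/c_0 and inflate the naive IID standard error by the integrated autocorrelation time. A rough R sketch, assuming x is a vector holding the chain (an AR(1) series is used here as a stand-in):

# Sketch: autocorrelation coefficients and a correlation-corrected standard error.
set.seed(1)
x  <- as.numeric(arima.sim(list(ar = 0.8), n = 10000))   # stand-in for a correlated chain
N  <- length(x)
xc <- x - mean(x)
c0 <- mean(xc^2)                                          # variance c_0
r  <- sapply(1:100, function(k) mean(xc[1:(N - k)] * xc[(k + 1):N]) / c0)  # r_k = c_k / c_0
K  <- which(r < 0)[1]; if (is.na(K)) K <- length(r)       # crude truncation of the sum
tau <- 1 + 2 * sum(r[seq_len(K - 1)])                     # integrated autocorrelation time
se_iid  <- sqrt(c0 / N)                                   # right only for independent draws
se_corr <- sqrt(c0 * tau / N)                             # inflated by the correlation
c(se_iid = se_iid, se_corr = se_corr)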

I am trying to estimate a Bayesian hierarchical model using the random-walk Metropolis-Hastings algorithm. While in a non-hierarchical model the algorithm is straightforward, I am not sure I am ...

I am interested in generating samples from a density $\pi(\theta)$ to construct a histogram for $\pi(\theta)$ and to use these samples to generate samples of $f(\theta)$ for some function $f$. I may ...

I need to generate samples from a pdf given by $\frac{f_Z(z)\cdot 1_{Z \in B}}{P(Z \in B)}$ where $Z \in \mathbb{R}^d$ is a normal random vector with independent components. $Z \in B$ is a set that is ...
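When P(Z in B) is not too small, the simplest exact approach is rejection: draw Z from the unconstrained normal and keep it only if it falls in B; otherwise an MCMC move that stays inside B is usually preferable. A sketch, assuming independent components and a hypothetical indicator in_B (the positive orthant is used as a placeholder constraint):

# Sketch: rejection sampling from f_Z(z) restricted to B.
d   <- 3
mu  <- c(0, 1, -1); sdv <- c(1, 2, 1)          # hypothetical component means and sds
in_B <- function(z) all(z > 0)                 # placeholder constraint set B
sample_constrained <- function(n) {
  out <- matrix(NA_real_, nrow = n, ncol = d)
  i <- 1
  while (i <= n) {
    z <- rnorm(d, mu, sdv)                     # unconstrained draw
    if (in_B(z)) { out[i, ] <- z; i <- i + 1 } # keep only draws inside B
  }
  out
}
draws <- sample_constrained(1000)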

In the book Introducing Monte Carlo Methods by Casella and Robert, there's a sentence that I'm having some trouble understanding: «If the domain explored in $q$ [proposal] is too small, ...

I have many states and have calculated a good custom proposal distribution for my Monte Carlo simulation. The system reaches a good solution faster than if it were to just use a randomly selected ...

I'd like to implement a version of Metropolis-adjusted Langevin sampling, but I'm unsure how to go about tuning the parameters of the proposal density. My understanding is that in MALA, a proposal ...
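A minimal MALA sketch, assuming a standard-normal toy target; the only tuning parameter here is the step size eps, which in practice is usually adjusted so the acceptance rate sits near 0.57:

# Sketch: Metropolis-adjusted Langevin algorithm with an explicit proposal-density correction.
log_p  <- function(x) -0.5 * sum(x^2)          # toy target: standard normal
grad_p <- function(x) -x                       # its log-density gradient
mala <- function(n_iter = 5000, x0 = c(0, 0), eps = 0.5) {
  d <- length(x0)
  X <- matrix(NA_real_, n_iter, d); X[1, ] <- x0
  log_q <- function(to, from)                  # log density of the Langevin proposal (up to a constant)
    -sum((to - from - 0.5 * eps^2 * grad_p(from))^2) / (2 * eps^2)
  acc <- 0
  for (t in 2:n_iter) {
    cur  <- X[t - 1, ]
    prop <- cur + 0.5 * eps^2 * grad_p(cur) + eps * rnorm(d)
    log_alpha <- log_p(prop) - log_p(cur) + log_q(cur, prop) - log_q(prop, cur)
    if (log(runif(1)) < log_alpha) { X[t, ] <- prop; acc <- acc + 1 } else X[t, ] <- cur
  }
  list(chain = X, accept_rate = acc / (n_iter - 1))
}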

I am drawing a sample Y of size n from a p-dimensional Normal ($\mu, \Sigma$). Typically, p is 5. I have $\bar{Y}$ and $V = YY'$, the sum of squares. Now I want to draw samples from this $\bar{Y}$, ...

I'm having trouble understanding the algorithm as briefly described here, and I can't find the original paper by Mira since it seems to be from some obscure print journal (Metron Volume 59). The ...

I'm using the Delayed Rejection Adaptive Metropolis (DRAM) algorithm (Haario et al., 2006) for some Bayesian inference and trying to get an intuition for it so I can be sure to use it properly. So far ...

I'm trying to understand how to estimate the parameter vector $\mathbf{\theta} = (\theta_1,\theta_2, \theta_3)$ of a model using the MH algorithm. I am given a joint posterior density: $p(\mathbf{\...

In a Metropolis-Hastings algorithm, if I have no data or not enough data, will this give me the prior means? I am asking this because I have written an algorithm, and when I use just a few data points this is not ...

I am studying the Metropolis-Hastings algorithm (from the book Understanding Computational Bayesian Statistics, Chapters 6-7) in its two different formulations: random-walk candidate density; independent ...

I have three questions regarding the understanding behind and implementation of a noninformative prior for variance. I'm attempting to build a Metropolis sampler and I'm trying to sample from a ...

Let's say I have a $GARCH(2, 3)$ model with $$\nu_i = \sigma_i\epsilon_i$$ where $\epsilon_i \sim N(0, 1)$ and $$\sigma_i^2 = a_0 + \sum\limits_{k = 1}^{2} a_k\sigma_{i - k}^2 + \sum\limits_{l = 1}^{3}...

I have time-series data generated via the Metropolis algorithm - Monte Carlo simulations. I need to know the correlation between the generated data points, given by $r_k = c_k/c_0$, where $c_0$ is the variance of ...

The Metropolis-Hastings ratio is defined as $$ \alpha(x'|x) = \min\left(1, \frac{P(x')g(x|x')}{P(x)g(x'|x)}\right) $$ and the state $x'$ is accepted if $u \leq \alpha(x'|x)$, where $u$ is ...
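In code the ratio is usually handled on the log scale, and the min with 1 means any log-ratio above zero is an automatic accept. A generic sketch of one MH step, assuming user-supplied log_P (log of the unnormalized target), log_g (log proposal density) and propose functions:

# Sketch: one Metropolis-Hastings accept/reject step on the log scale.
mh_step <- function(x, log_P, log_g, propose) {
  x_new <- propose(x)                                    # draw x' ~ g(. | x)
  log_ratio <- log_P(x_new) + log_g(x, x_new) -          # log P(x') + log g(x | x')
               log_P(x)     - log_g(x_new, x)            # - log P(x) - log g(x' | x)
  # alpha = min(1, exp(log_ratio)); accept when u <= alpha
  if (log(runif(1)) <= min(0, log_ratio)) x_new else x
}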

I was going through the Stan documentation which can be downloaded from here. I was particularly interested in their implementation of the Gelman-Rubin diagnostic. The original paper Gelman & ...
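For reference, the basic (non-split) potential scale reduction factor from Gelman & Rubin can be computed directly from several chains; a sketch, assuming chains is an n-by-m matrix with one column per chain (Stan's split-R-hat differs by splitting each chain in half first):

# Sketch: basic Gelman-Rubin R-hat from m parallel chains of length n.
set.seed(4)
n <- 1000; m <- 4
chains <- sapply(1:m, function(j) cumsum(rnorm(n)) / sqrt(1:n) + rnorm(1))  # toy chains with offsets
chain_means <- colMeans(chains)
W <- mean(apply(chains, 2, var))        # within-chain variance
B <- n * var(chain_means)               # between-chain variance (n / (m - 1) * sum of squared deviations)
var_plus <- (n - 1) / n * W + B / n     # pooled posterior-variance estimate
rhat <- sqrt(var_plus / W)
rhat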

I wanted to implement a multinomial probit model in a Bayesian setting with random-walk Metropolis-Hastings. To achieve the best numerical efficiency when drawing $\beta$, I need to use the Hessian matrix of $\beta$. ...

What are the differences between the M-H algorithm and the M-H-within-Gibbs algorithm? If possible, please post the two algorithms.
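Roughly, plain MH proposes and accepts/rejects the whole parameter vector at once, while MH-within-Gibbs cycles through the components and applies a univariate MH step to each one with the others held fixed. A side-by-side R sketch, assuming a hypothetical joint log-posterior log_post:

# Sketch: plain MH versus MH-within-Gibbs on a two-parameter log-posterior.
log_post <- function(theta) sum(dnorm(theta, mean = c(0, 5), sd = c(1, 2), log = TRUE))

mh <- function(n_iter, theta0, sd_prop) {        # plain MH: propose the whole vector at once
  TH <- matrix(NA_real_, n_iter, length(theta0)); TH[1, ] <- theta0
  for (t in 2:n_iter) {
    prop <- TH[t - 1, ] + rnorm(length(theta0), 0, sd_prop)
    TH[t, ] <- if (log(runif(1)) < log_post(prop) - log_post(TH[t - 1, ])) prop else TH[t - 1, ]
  }
  TH
}

mh_within_gibbs <- function(n_iter, theta0, sd_prop) {   # one univariate MH step per component
  TH <- matrix(NA_real_, n_iter, length(theta0)); TH[1, ] <- theta0
  for (t in 2:n_iter) {
    cur <- TH[t - 1, ]
    for (j in seq_along(cur)) {
      prop <- cur; prop[j] <- cur[j] + rnorm(1, 0, sd_prop)
      if (log(runif(1)) < log_post(prop) - log_post(cur)) cur <- prop
    }
    TH[t, ] <- cur
  }
  TH
}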

Additionally: Use a simple symmetric random walk as the proposal distribution. Source: "Introduction to Stochastic Processes with R" - Robert P. Dobrow, Chapter 5 Exercises: Question 5.6 I know this ...

Can the acceptance ratio in the MH algorithm be greater than 1? When that happens the proposal will of course be accepted with probability 1. But is it "ok" to allow an acceptance ratio greater than 1?

Basic question about the MCMC Metropolis–Hastings algorithm. I am trying to understand the Metropolis–Hastings algorithm and its connection to Bayesian analysis. Suppose I want to construct an MCMC MH ...

I have a proposal distribution for one parameter, theta_guess: theta_guess = guessleft(theta_accept(1,r-1), 0.01, 0), which is a ...

Why is the indicator function equivalent to the integral over the Dirac mass? In my lecture notes the proof for the kernel of the Metropolis-Hastings algorithm is given as follows: $$P(X^t \in \mathcal{X}...

I came across the following simulation problem: given a set $\{\omega_1,\ldots,\omega_d\}$ of known real numbers, a distribution on $\{-1,1\}^d$ is defined by $$\mathbb{P}(X=(x_1,\ldots,x_d))\propto (...

I want to write a Metropolis sampler to sample independent rvs $x$ from the mixture model $X \sim \frac{1}{2}\big[\mathscr{N}(\mu_1, \sigma_1) + \mathscr{N}(\mu_2, \sigma_2)\big]$. My algorithm is ...
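A random-walk Metropolis sketch for that target, using illustrative values mu1 = -3, mu2 = 3, sigma1 = sigma2 = 1; the proposal standard deviation has to be large enough for the chain to jump between the two modes:

# Sketch: random-walk Metropolis targeting an equal-weight two-component normal mixture.
mix_dens <- function(x, mu1 = -3, mu2 = 3, s1 = 1, s2 = 1)
  0.5 * dnorm(x, mu1, s1) + 0.5 * dnorm(x, mu2, s2)

metropolis_mix <- function(n_iter = 20000, x0 = 0, sd_prop = 4) {
  x <- numeric(n_iter); x[1] <- x0
  for (t in 2:n_iter) {
    prop  <- x[t - 1] + rnorm(1, 0, sd_prop)          # symmetric random-walk proposal
    ratio <- mix_dens(prop) / mix_dens(x[t - 1])      # symmetric proposal: no Hastings term
    x[t]  <- if (runif(1) < ratio) prop else x[t - 1]
  }
  x
}
draws <- metropolis_mix()
# hist(draws, breaks = 80, freq = FALSE)              # should show both modes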

Intuitively, if I want to update two parameters in one step, I have to come up with a proposal that is good for both parameters. Assuming that the parameters are independent, is it correct to ...

For an ergodic Markov chain, detailed balance does not necessarily have to hold when it converges to its stationary distribution, which means that $\pi(\theta)\, P(\theta^{\prime}|\theta) \neq \pi(\...

Is it OK to choose the same proposal distribution as the prior in the Metropolis algorithm? Perhaps it's a simple question, and to me it seems totally fine, but I always see people choose different ...
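Using the prior as an independence proposal is valid; the Hastings ratio then reduces to the likelihood ratio L(theta')/L(theta), although the sampler can be very inefficient when the posterior concentrates far from the prior. A toy sketch with a normal-mean model and hypothetical data:

# Sketch: independence MH with the prior as the proposal; the ratio collapses to a likelihood ratio.
set.seed(2)
y <- rnorm(20, mean = 1.5, sd = 1)                  # hypothetical data
log_lik <- function(theta) sum(dnorm(y, theta, 1, log = TRUE))

n_iter <- 5000
theta  <- numeric(n_iter); theta[1] <- 0
for (t in 2:n_iter) {
  prop <- rnorm(1, 0, 5)                            # draw the candidate from the N(0, 5) prior
  if (log(runif(1)) < log_lik(prop) - log_lik(theta[t - 1])) {
    theta[t] <- prop
  } else theta[t] <- theta[t - 1]
}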

I am using MCMC with the Metropolis-Hastings algorithm to generate solutions of a nonlinear regression problem. Likelihood: my likelihood is a Gaussian distribution, centered at 0, of the residuals ...

I have just been doing some reading on Gibbs sampling and Metropolis Hastings algorithm and have a couple of questions. As I understand it, in the case of Gibbs sampling, if we have a large ...

I tried to simulate from a bivariate density $p(x,y)$ using Metropolis algorithms in R and had no luck. The density can be expressed as $p(y|x)p(x)$, where $p(x)$ is Singh-Maddala distribution $p(x)...

If one has to sample (with replacement) from a population $(x_1,x_2,\ldots)$ with weights $(\omega_1,\omega_2,\ldots)$, possibly infinite (although this is asking too much without further details), a ...

I want to speed up my R implementation of a Metropolis-Hastings procedure by replacing the slow parts with functions written in Rcpp. There are already some examples online using Rcpp to speed up ...

Typically in Gibbs sampling we want to sample from a joint distribution $p(X_1, X_2, ..., X_N)$, but because the joint is hard to sample from directly, we instead achieve this by iteratively sampling ...
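The classic illustration is a bivariate normal with correlation rho, where both full conditionals are univariate normals. A minimal Gibbs sketch under that assumption:

# Sketch: Gibbs sampling for a standard bivariate normal with correlation rho.
rho <- 0.8
n_iter <- 5000
x <- numeric(n_iter); y <- numeric(n_iter)
for (t in 2:n_iter) {
  x[t] <- rnorm(1, mean = rho * y[t - 1], sd = sqrt(1 - rho^2))  # draw from p(x | y)
  y[t] <- rnorm(1, mean = rho * x[t],     sd = sqrt(1 - rho^2))  # draw from p(y | x)
}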

Consider a univariate normal model with mean $\mu$ and variance $\tau$. Suppose we use a Beta(2,2) prior for $\mu$ (somehow we know $\mu$ is between zero and one) and a log-normal(1,10) prior for $\tau$ (recall ...
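Since neither full conditional is standard, a random-walk Metropolis on (mu, tau) is one option; proposals that leave the support are simply rejected because the prior density is zero there. A sketch, assuming hypothetical data y and reading log-normal(1, 10) as meanlog = 1 with log-scale variance 10:

# Sketch: random-walk Metropolis for the normal model with Beta(2,2) and log-normal(1,10) priors.
set.seed(3)
y <- rnorm(25, mean = 0.4, sd = 0.5)         # hypothetical data
log_post <- function(mu, tau) {
  if (mu <= 0 || mu >= 1 || tau <= 0) return(-Inf)   # outside the prior support
  sum(dnorm(y, mu, sqrt(tau), log = TRUE)) +
    dbeta(mu, 2, 2, log = TRUE) +
    dlnorm(tau, meanlog = 1, sdlog = sqrt(10), log = TRUE)
}
n_iter <- 10000
mu <- numeric(n_iter); tau <- numeric(n_iter)
mu[1] <- 0.5; tau[1] <- 1
for (t in 2:n_iter) {
  mu_p  <- mu[t - 1]  + rnorm(1, 0, 0.1)     # symmetric random-walk proposals
  tau_p <- tau[t - 1] + rnorm(1, 0, 0.2)
  log_a <- log_post(mu_p, tau_p) - log_post(mu[t - 1], tau[t - 1])
  if (log(runif(1)) < log_a) { mu[t] <- mu_p; tau[t] <- tau_p }
  else { mu[t] <- mu[t - 1]; tau[t] <- tau[t - 1] }
}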

Assume I have a function $g(x)$ that I want to integrate $$ \int_{-\infty}^\infty g(x) dx.$$ Of course assuming $g(x)$ goes to zero at the endpoints, no blowups, nice function. One way that I've been ...
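One standard route is importance sampling: draw from a density q that covers the support of g and average g(x)/q(x). A sketch with an illustrative Gaussian-bump integrand (true integral sqrt(2*pi)) and a heavier-tailed Student-t proposal:

# Sketch: Monte Carlo integration of g via importance sampling.
g <- function(x) exp(-x^2 / 2)                   # illustrative integrand
q_draw <- function(n) rt(n, df = 3)              # proposal draws (heavy-tailed, covers g's support)
q_dens <- function(x) dt(x, df = 3)
n <- 1e5
x <- q_draw(n)
estimate <- mean(g(x) / q_dens(x))               # importance-sampling estimate of the integral
c(estimate = estimate, truth = sqrt(2 * pi))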

I have been trying to learn MCMC methods and have come across Metropolis Hastings, Gibbs, Importance, and Rejection sampling. While some of these differences are obvious, i.e., how Gibbs is a special ...

I think I am confused due to the lax notation typically used when dealing with probabilities and not having a formal probability background. Bayes' Rule tells me that $$Pr(X_t=a|X_{t+1}=b)Pr(X_{t+1}=...

I have been trying to get a sense of the different problems in frequentist settings where MCMC is used. I am familiar that MCMC (or Monte Carlo) is used in fitting GLMMs and in maybe Monte Carlo EM ...

I'm running MCMC using the Metropolis-Hastings algorithm to fit an equation with 6 parameters on a dataset composed of 30 instances. How will the fact that my dataset is so small impact the posterior ...

As I understand MCMC sampling, the fulfillment of the detailed balance equation guarantees that our Markov chain has reached its stationary distribution (given we ensure ergodicity). Detailed balance is: $\...
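Metropolis-Hastings enforces detailed balance by construction, and on a small discrete state space this can be checked numerically by building the transition matrix explicitly. A sketch with a 4-state target and a uniform symmetric proposal:

# Sketch: verifying pi_i * P_ij = pi_j * P_ji for a discrete Metropolis chain.
pi_t <- c(0.1, 0.2, 0.3, 0.4)                    # target distribution on 4 states
S <- length(pi_t)
Q <- matrix(1 / S, S, S)                         # uniform, symmetric proposal
P <- matrix(0, S, S)
for (i in 1:S) for (j in 1:S) {
  if (i != j) P[i, j] <- Q[i, j] * min(1, pi_t[j] / pi_t[i])   # Metropolis acceptance
}
diag(P) <- 1 - rowSums(P)                        # rejection mass stays on the diagonal
D <- diag(pi_t) %*% P                            # D[i, j] = pi_i * P_ij
max(abs(D - t(D)))                               # ~ 0, so detailed balance holds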
