Bayesian questions - Chinese 1answer

4,442 Bayesian questions.

In a state-space model, a system model with a first-order trend is represented as $$ x_{t} = x_{t-1} + e_{t}, $$ where $x_{t}$ is the system state and $e_{t}$ is the system noise. Also, a system model with ...

I have a problem for which I believe I should use the hypergeometric distribution, but I can't figure out how to do it in R. Say I have a bag of marbles with known number ($N$) of marbles, but the ...
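
The question asks for R, but the calculation itself is small enough to sketch directly; a stdlib-only Python version with made-up bag sizes (a bag of $N = 50$ marbles, 20 of them marked, 10 drawn):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(k marked marbles in a sample of n, drawn without replacement
    from a bag of N marbles of which K are marked)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Hypothetical numbers: N = 50 marbles, K = 20 marked, n = 10 drawn.
p_exactly_4 = hypergeom_pmf(4, 50, 20, 10)
p_at_most_4 = sum(hypergeom_pmf(k, 50, 20, 10) for k in range(5))
```

In R the same quantities come from `dhyper()` and `phyper()` (note that R parameterizes by the counts of marked and unmarked marbles rather than by the bag total).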

I'm trying to understand how to estimate the parameter vector $\mathbf{\theta} = (\theta_1,\theta_2, \theta_3)$ of a model using the Metropolis–Hastings (MH) algorithm. I am given a joint posterior density: $p(\mathbf{\...
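
Since the joint density in the question is cut off, here is a minimal random-walk Metropolis–Hastings sketch for a three-parameter model with a placeholder log-posterior (three independent standard normals); only `log_post` would need to change for a real problem:

```python
import math
import random

random.seed(0)

def log_post(theta):
    # Placeholder: log of an unnormalized posterior with three
    # independent standard-normal components. Replace with the
    # log of the actual joint density p(theta | data).
    return -0.5 * sum(t * t for t in theta)

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5):
    theta = list(theta0)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        # Symmetric random-walk proposal, so the Hastings ratio
        # reduces to a ratio of posterior densities.
        prop = [t + random.gauss(0.0, step) for t in theta]
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(list(theta))
    return samples

samples = metropolis_hastings(log_post, [0.0, 0.0, 0.0])
```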

I am doing Bayesian inference on normal data, and wonder how robust the results are with respect to some assumptions and potential mistakes. Specifically, for simplicity I want to assume that ...

I am trying to conceptualize the analytical relationship between the prior distribution and posterior distribution obtained by MCMC for Bayesian inference. Sorry for the non-rigorous notation but I ...

I am trying to make a scenario where three uniformly distributed parameters need to be constrained. ...

I have come across two definitions of 'Type 1 error' in dictionaries published by Oxford University Press: In hypothesis testing, the incorrect rejection of the null hypothesis when it is true. ...

Question 1: Is there any difference between a Bayes classifier and a Naive Bayes classifier? Is there any fundamental difference? I searched the web and was unable to find a good answer. Question 2: ...

I am very new to GMMs and am currently trying to do unsupervised clustering on some data. The data has 34 features (dimensions). I am thinking of using a GMM with a Dirichlet process, and trying to code ...

I am trying to learn Bayesian data analysis, and what I see is that most computations are carried out using MCMC simulations. As far as I understand, to simulate MCMC we need to know the ...

In Bayesian probability, we think of having a prior distribution over a set of possible models, and updating this distribution every time we find new information. But with a neural network with ...

I have a question about Bayesian networks. I have a network with many parent nodes and one child node. I have the probabilities for the parents and for the child. The child node is binary, so there ...

I'm trying to understand the EM algorithm. I've found a tutorial on it. It goes like this: two coins (A and B), 5 rounds of flipping 10 times each. We forgot, however, which coin was flipped in each round. ...
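
If this is the well-known two-coin tutorial, the E- and M-steps can be written out in a few lines; the head counts and starting guesses below are the illustrative ones commonly used with that example, so treat them as assumptions:

```python
from math import comb

# Heads observed in each of 5 rounds of n = 10 flips; coin identity unknown.
heads = [5, 9, 8, 4, 7]
n = 10

def lik(h, p):
    # Binomial likelihood of h heads in n flips for a coin with bias p.
    return comb(n, h) * p**h * (1 - p)**(n - h)

pA, pB = 0.6, 0.5                      # initial guesses for the two biases
for _ in range(50):                    # EM iterations
    hA = tA = hB = tB = 0.0
    for h in heads:
        # E-step: posterior responsibility of coin A for this round.
        wA = lik(h, pA) / (lik(h, pA) + lik(h, pB))
        hA += wA * h
        tA += wA * (n - h)
        hB += (1 - wA) * h
        tB += (1 - wA) * (n - h)
    # M-step: re-estimate each bias from its expected head/tail counts.
    pA = hA / (hA + tA)
    pB = hB / (hB + tB)
```

The responsibilities `wA` are soft assignments: instead of deciding which coin produced each round, every round contributes fractionally to both coins' counts.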

The Bayesian model evidence (the denominator of the posterior) can sometimes be much bigger than 1, even though it is meant to be the probability of observing a certain value under a hypothesis. How should one interpret the meaning of ...

Model: The following model corresponds to samples drawn from a Gaussian distribution with unknown mean and unknown variance: \begin{align} x | \mu, \sigma^2 &\sim \mathcal{N}(\mu, \sigma^2 )\\ \...

Over time I've learned that many (most?) methods used in classical statistics can be interpreted as evaluating a Bayesian model in some plausible way, while I find the standard explanations much less ...

Given a marginal Gaussian distribution for x and a conditional Gaussian distribution for y given x in the form $$p(x) = N(x|\mu, \Lambda^{-1})$$ $$p(y|x) = N(y|Ax + b, L^{-1})$$ the marginal ...
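
The standard result here (Bishop, Pattern Recognition and Machine Learning, §2.3.3) is $p(y) = \mathcal{N}(y \mid A\mu + b,\ L^{-1} + A\Lambda^{-1}A^{T})$. A quick scalar Monte Carlo sanity check, with arbitrary made-up parameter values:

```python
import random

random.seed(1)

# Scalar case: p(x) = N(x | mu, 1/Lam), p(y|x) = N(y | A*x + b, 1/L).
mu, Lam = 2.0, 4.0
A, b, L = 3.0, 1.0, 5.0

ys = []
for _ in range(200_000):
    x = random.gauss(mu, Lam ** -0.5)              # draw x from its marginal
    ys.append(random.gauss(A * x + b, L ** -0.5))  # then y given x

mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)
# Theory: E[y] = A*mu + b = 7.0, Var[y] = 1/L + A**2/Lam = 2.45
```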

I have a very general question, any links to relevant papers or which book I should consult should suffice. So, let's say I've got a Bayesian model (for example) to predict the outcome of a soccer ...

So far I have seen two ways of writing multinomial NB, and I was wondering which is the correct one to use in theory. Example: suppose we are going to classify the sentence We are going ...

I am trying to use pymc to estimate parameters for a model. As I am not familiar with the methodology I first generated the data myself and tried to derive the posterior distribution using pymc. The ...

This is an extension to this question, which is about handling arbitrary (potentially unbounded) reward distributions for the multi-armed bandit problem. Given a sequence of observed rewards $r_t \in \...

I'm trying to understand the following sentence «Cross-validation and information criteria make a correction for using the data twice (in constructing the posterior and in model assessment) and ...

Some context: I want to explain risk scores to students. These students are familiar with concepts such as the “prevalence” of a disease, where prevalence = (# people with disease) / (# people with ...

To be precise, I'm looking at this presentation https://kaybrodersen.github.io/talks/Brodersen_2013_03_22.pdf, but I don't understand what the connection is between Laplace's method and variational Bayes. ...

In Deep Learning, Chapter 5.6, Bayesian linear regression is introduced. I'm confused by the following formula: $$ p(w | X, Y) \propto P(Y | X, w) P(w)$$ $X$ is the sample input data. $Y$ is the ...

I'm trying to implement the Adaptive Metropolis–Hastings (AM) algorithm. I know there are many AM algorithms out there. The one I want to use is the one proposed by Haario et al. (2001) and later restructured ...

Could you please tell me how I can calculate the conditional probability of several events? I have this example: how can I calculate P(X2 | X4) and P(X5, X3 | ¬X4)?

Referencing this question, I know that if $x_1$ and $x_2$ are conditionally independent given $y$ (big assumption), then $$P(y | x_1,x_2) = \frac{P(x_1,x_2 | y)P(y)}{P(x_2 | x_1)P(x_1)}$$ $$ = \frac{...

I am trying to understand the motivation behind Variational Bayes. I get that the posterior $p(z|x)$ can be intractable, when we would have to compute the evidence with $p(x) = \int p(x|z)p(z) \text{d}...
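
For reference, the identity that usually motivates the variational approach is the decomposition of the log-evidence into the ELBO plus a KL term, valid for any choice of $q(z)$:

$$\log p(x) = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\text{ELBO}(q)} + \mathrm{KL}\!\left(q(z)\,\|\,p(z|x)\right),$$

so maximizing the ELBO over $q$ is equivalent to minimizing the KL divergence to the intractable posterior, without ever evaluating $p(x)$.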

I wrote a small simulation in R in which I simulate two slightly different groups that are normally distributed. I then simulated a series of ten identical experiments in which 25 new participants are ...

Can anyone create a simple normal and a lognormal model to run with OpenBUGS/WinBUGS? I have a txt file with all the data; I can send it by email.

The problem: I want to fit the model parameters of a simple 2-Gaussian mixture population. Given all the hype around Bayesian methods, I want to understand whether, for this problem, Bayesian inference is a ...

Formal problem: I'm given random variables $X\in\{0, 1\}$, $T \in \mathbb{R}$ and $S \in \{0, 1\}$ such that 1) $S$ and $T$ are independent given $X$, i.e. for both $x\in\{0, 1\}$: $$P(S\text{ and }...

I've seen Bayesian models specified as \begin{align*} Y_i|v_i &\overset{ind}{\sim} f_i(y_i|v_i),\\ v_i & \overset{ind}{\sim} g_i(v_i). \end{align*} My question is about the top line $Y_i|v_i\...

What are the definitions of semi-conjugate priors and of conditionally conjugate priors? I came across the terms in Gelman's Bayesian Data Analysis, but I couldn't find their definitions.

My outcome variable is a series of Bernoulli trials where some values are missing: y $\in$ {0, 1, NA}. How do you impute NA values for an outcome variable in rstan in the context of a GLM, assuming ...

Consider the posterior distribution $p(\theta|x)$. We aim to find a "good" estimate of the random variable $\theta$. The Bayes risk associated with the loss function $L(\hat{\theta}, \theta)$ is ...
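
For reference, the estimate that minimizes the Bayes risk is the one that minimizes the posterior expected loss for each $x$:

$$\hat{\theta}(x) = \arg\min_{a} \int L(a, \theta)\, p(\theta|x)\, \mathrm{d}\theta;$$

for example, squared-error loss $L(a, \theta) = (a - \theta)^2$ gives the posterior mean $\hat{\theta}(x) = \mathbb{E}[\theta \mid x]$, while absolute-error loss gives the posterior median.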

I would like to compute the probability $P(X > Y)$ with R. I used JAGS to sample from the posterior distribution of each variable, so I have a Markov chain for each variable (of length $3\times 10^{...
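
Given the two chains, $P(X > Y)$ is just the fraction of paired posterior draws with $x > y$; a sketch with simulated chains standing in for the JAGS output:

```python
import random

random.seed(0)

# Stand-ins for the two posterior chains (in practice, the JAGS output).
x_chain = [random.gauss(1.0, 1.0) for _ in range(30_000)]
y_chain = [random.gauss(0.0, 1.0) for _ in range(30_000)]

# Monte Carlo estimate of P(X > Y) from paired draws.
p_x_gt_y = sum(x > y for x, y in zip(x_chain, y_chain)) / len(x_chain)
```

For these stand-in chains the true value is $\Phi(1/\sqrt{2}) \approx 0.76$; in R the same one-liner is `mean(x_chain > y_chain)`.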

Some sources say the likelihood function is not a conditional probability; some say it is. This is very confusing to me. According to most sources I have seen, the likelihood of a distribution with ...

Let $$X_i\sim \mathcal{N}(0,\sigma^2).$$ Then we know that $$\sum_{i=1}^N\frac{X_i^2}{N}\sim\Gamma\left(\frac{N}{2},\frac{2\sigma^2}{N}\right),$$ i.e. the empirical variance follows a Gamma distribution. How do ...
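
A quick simulation check of the claim: the sample mean and variance of $\sum_i X_i^2/N$ should match the $\Gamma(N/2,\ 2\sigma^2/N)$ moments, i.e. mean $\sigma^2$ and variance $2\sigma^4/N$ (the values of $N$ and $\sigma^2$ below are arbitrary):

```python
import random

random.seed(0)

N, sigma2 = 10, 4.0
draws = []
for _ in range(50_000):
    # One realization of sum(X_i^2) / N for N draws X_i ~ N(0, sigma2).
    s = sum(random.gauss(0.0, sigma2 ** 0.5) ** 2 for _ in range(N)) / N
    draws.append(s)

m = sum(draws) / len(draws)
v = sum((d - m) ** 2 for d in draws) / len(draws)
# Theory: mean = sigma2 = 4.0, variance = 2 * sigma2**2 / N = 3.2
```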

I'm running the Bayesian version of the Lee–Carter model in JAGS, using the rjags R package. Given a matrix of data $M$ such that $M_{x,t}=\log m_x(t)$, where $m_x(t)$ is ...

In a group of students, 2 out of 18 are left-handed. Find the posterior distribution of the proportion of left-handed students in the population, assuming an uninformative prior. Summarize the results. ...
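
With a uniform Beta(1, 1) prior on the left-handed proportion and a binomial likelihood, conjugacy gives the posterior Beta(1 + 2, 1 + 16); its summaries are simple arithmetic:

```python
# Posterior Beta(a, b) after 2 successes in 18 trials under a Beta(1, 1) prior.
a, b = 1 + 2, 1 + 16

post_mean = a / (a + b)                    # 3/20 = 0.15
post_mode = (a - 1) / (a + b - 2)          # 2/18, the sample proportion
post_var = a * b / ((a + b) ** 2 * (a + b + 1))
```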

Show that the Bayes classifier achieves the lowest possible error rate, defined as $$ E(f) = \int \int \mathbb{I}(y \neq f(x)) \, p(x, y)\, dx\, dy, $$ where $f(x)$ is the classifier and $p(x, y)$ is the ...

The question could also be: "estimating the true ability of each player", though I think that already implies some assumptions. In this paper I saw some references to Rankade and TrueSkill used by ...

I've got some data that looks like it is Gamma distributed. I've constructed the prior distribution from mean = 232 and standard deviation = 150, which yields the Gamma distribution parameters: a_prior (...
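
For reference, the moment-matching step behind those numbers: a Gamma with shape $a$ and scale $s$ has mean $as$ and variance $as^2$, so $a = \text{mean}^2/\text{var}$ and $s = \text{var}/\text{mean}$ (the rate is $1/s$). With the question's mean and standard deviation:

```python
mean, sd = 232.0, 150.0
var = sd ** 2

a_prior = mean ** 2 / var        # shape, roughly 2.39
scale_prior = var / mean         # scale, roughly 97.0 (rate ~ 0.0103)
```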

Suppose we have a multinomial distribution with 4 categories, each associated with a probability of being selected, say $\theta_i$, $i=1,\dots,4$. And I know for sure that $\...

I fail to understand how to define the prior distribution for a multinomial regression. In what units should the prior probability be set, given that the responses don't really have units (but just ...

In the middle of page 64 of the third edition of Bayesian Data Analysis, Gelman writes... We saw in Chapter 2 that a sensibly vague prior for $\mu$ and $\sigma^2$, assuming prior independence of ...

I have come across these slides (slides #16 and #17) in one of the online courses. The instructor was trying to explain how the maximum a posteriori (MAP) estimate is actually the solution of $L(\theta) = \...

How can I implement a Gibbs sampler for the posterior distribution, and estimate the marginal posterior distributions by making histograms?
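
A minimal sketch on a toy target, a bivariate normal with correlation $\rho$ and unit variances, where both full conditionals are known univariate normals; the marginal posterior is then summarized by a histogram of one coordinate of the retained draws:

```python
import random

random.seed(0)

rho = 0.8                      # correlation of the toy bivariate normal target
x, y = 0.0, 0.0
samples = []
for i in range(20_000):
    # Gibbs updates: each full conditional is N(rho * other, 1 - rho**2).
    x = random.gauss(rho * y, (1 - rho ** 2) ** 0.5)
    y = random.gauss(rho * x, (1 - rho ** 2) ** 0.5)
    if i >= 1000:              # discard burn-in
        samples.append((x, y))

# Marginal posterior of x: summarize the first coordinates.
xs = [s[0] for s in samples]
mean_x = sum(xs) / len(xs)
```

Plotting, e.g. `matplotlib.pyplot.hist(xs, bins=50)`, turns `xs` into the marginal posterior histogram; for a real model, replace the two `random.gauss` lines with draws from the model's full conditionals.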
